How to: Self Tuning My Supercharged 4.0L SOHC V6 | Page 2 | Ford Explorer Forums - Serious Explorations


How to: Self Tuning My Supercharged 4.0L SOHC V6

Status
Not open for further replies.
MTF modifications (continued)

I made some smoothing adjustments for 250, 275, 300, 450 and 475 MAF AD counts.
MTFPlot4.3.jpg

MTFGridLoMid4.3.jpg

My goal is to get the actual AFR within a few percent of the commanded. Except for a brief period after engine start, the PCM will normally be in closed loop, and the STFTs will correct for any errors when the MAF AD count is below 500. Accuracy is more important during WOT, when the PCM is in open loop.

Tailoring the base fuel table for low to medium airflow MTF tuning worked out well in terms of variation and resolution, so I modified it for medium to high airflow MTF tuning.
BaseFuelTable2KSR04.3.jpg

It will be difficult for me to test the high airflows. My Sport is Toreador Red, and the Dynomax VT muffler with its 3 in. dia. tailpipe is loud at WOT above 2,500 rpm; it can probably be heard for miles. The other day I did a short WOT pull to 5,000 rpm in 3rd speed, and in just two minutes I had a local sheriff tailgating me. I'll have to do full performance testing on the dyno, but I'd like to get the MTF reasonably close first.

To avoid any WOT related fuel complications I altered the TP for WOT table.
TPforWOT2KSR04.3.jpg

For my Sport the TP for WOT is slightly above 750. I used 650 in this case to see if there are any obvious AFR changes when switching to/from WOT.

The Advantage III description for Aircharge WOT Multiplier states "This MUST be set to 1.9 on all cars, especially on newer models. This will basically limit the airflow that the PCM thinks is going into the engine and cause the engine to run very lean." My stock value is 1.02 but I set it to 1.00 to test the accuracy of the description. The plot below may be an example of what's being described. At tip in (brown = TP) the actual lambda (blue) goes lean while the commanded (purple) is richening. Yellow is engine speed.
TipInLeanA04.3.jpg

In the plot below for a tune I'm paying to have generated the Aircharge WOT Multiplier is set to 1.9 and the actual lambda does not go lean at tip in.
TipInNotLean.jpg

However, in that tune WOT was set at 550 or above which didn't occur until after time equals 131 seconds. Since I didn't notice anything weird happening with WOT set to 650 in my tune I'm going to lower it to 500 and set the Aircharge WOT Multiplier to 1.9 to see if that solves my lean condition at tip in.

The Advantage III description for Correction for Max Aircharge states that "Setting this value to 1.99, like the WOT Aircharge Multiplier, will prevent load from ever being clipped, meaning that load will always be actual load." I changed the .98 stock value to 1.99 even though I haven't noticed any load clipping.

After reviewing another datalog with the revised MTF and Base Fuel table, I noticed a significant increase in actual lambda when accelerating versus stable engine load for any given MAF AD count. During normal operation the PCM will be in open loop when the TP values specified in the Fuel Open Loop TP table are exceeded. To avoid detonation it is important that the actual lambda not be leaner than the commanded lambda, so I corrected the MTF using the acceleration data. I found that I needed to increase the MTF lbs flow about 5% from 300 to 550 MAF AD counts and decrease it about 5% at 850 MAF AD counts and above.
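A rough sketch of this kind of per-point percentage correction, in Python (the data format and names are illustrative, not the tuning editor's actual representation; in practice each factor would come from the logged actual/commanded lambda ratio at that airflow):

```python
def correct_mtf(mtf, corrections):
    """Apply per-point multiplicative corrections to an MTF.
    mtf: {maf_ad_count: lbs_flow}; corrections: {maf_ad_count: factor}."""
    return {ad: flow * corrections.get(ad, 1.0) for ad, flow in mtf.items()}

# Hypothetical MTF points (flow units arbitrary) and the corrections
# described above: +5% from 300 to 550 counts, -5% at 850 and above.
mtf = {250: 1.00, 300: 1.20, 550: 2.60, 850: 4.10, 900: 4.40}
corrections = {ad: 1.05 if 300 <= ad <= 550 else 0.95 if ad >= 850 else 1.0
               for ad in mtf}
revised = correct_mtf(mtf, corrections)
print(revised[300], revised[850])
```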


MTF modifications (continued)

Here's a plot of the entire MTF after modifications.
MTFPlot4.4.jpg

The vertical scale is lbs mass per tick, times 10^10 (10 billion). I have no data for MAF AD counts greater than 850, so those values are estimated. I have a small utility trailer that I use to haul mulch; it holds 1 cubic yard and is rated for 900 lbs. I plan to get a load of mulch in a few days and do some hill climbs pulling the trailer.
 






Altering Load w/Failed MAF table

I've read that the Load W/failed MAF table provides an accelerator pump type function in some vehicles to reduce a lean condition at tip in. My stock table is shown below.
LoadWFailedMAF.jpg

The stock TP range was adequate, but I decided to expand the engine speed range to the minimum and maximum that could occur so the PCM can interpolate to the actual value. Changing the engine speed range is done by modifying the Xnorm MAF 01 table.
XnormMAF014.4.jpg

In stock form the PCM limits engine speed to 6,250 rpm. I did not change that because there are no engine internal modifications, and with stock internals the power decreases rapidly above 6,000 rpm. The load values in the table should be based on datalogs after the MTF is stabilized; for now I just multiplied the stock entries by a factor of 1.30.
LoadWFailedMAF4.4.jpg
 






Tip In Spark adjustments

The spark advance is significantly reduced at tip in even though the Amount to Reduce Spark for Tip In Cntrl table has 0 entered for all loads and engine speeds.
TipInSpark4.3.jpg

According to Spark Source, the amount of advance is controlled by the Borderline Knock Table and should be 14.5 degrees at load = .9 and 3770 rpm. However, Knock Sensor Retard is reducing that by 5.75 degrees, leaving only 8.75 degrees of advance. But the actual is only 4 degrees of advance, which means some other adder or multiplier is being applied. There is a Tip In Spark Retard Table for Change in AM (air mass); the Advantage III description states "This table should be set to zero to disable tip in spark retard," so I'm going to try doing that.
 






WOT AFR

After the last changes the WOT AFR looks pretty good.
AFRWOT4.4.jpg

For MAF AD counts from 675 (487 secs) to 800 (494 secs) the actual lambda is slightly richer than the commanded, which is what I wanted for safety reasons. Also, leaning during rapid TP increase has almost been eliminated. I'll lean the MTF for MAF AD counts from 400 (482 secs) to 525 (484 secs) and at 625 (486 secs).

After making several minor adjustments for MAF AD counts below 600 I decided the MTF was good enough to 700. I modified the TP for Open Loop table to allow closed loop up to 300 relative (max MAF AD count of about 700).
TPforOpenLoop5.2.jpg

I also switched from my AFR testing Base Fuel Table to my planned conservative Base Fuel Table.
BaseFuelTable2KSR02.1.jpg

After allowing the PCM to operate closed loop for TPs up to 300 and RPMs up to 3500, the commanded lambda in open loop no longer matches the entries in the Base Fuel Table. In the instance below, when the PCM goes from closed to open loop at 435.8 seconds (1300 rpm), the STFT1 (commanded lambda) decreases from 1.16 to 1.11 at 437.8 seconds (3400 rpm), when the PCM returns to closed loop.
WeirdSTFT5.2.jpg

According to the Base Fuel Table, the commanded lambda should be .843, decreasing to about .814. Relevant settings at the time:
WOT Fuel Multiplier = 1 for all engine speeds
Open Loop Fuel Multiplier = 1
Aircharge WOT Multiplier = 1.899
Engine Displacement = 75% or 125% of stock
Open Loop A/F Enleanment Multiplier = 1
Adaptive Control Switch = 0
Adaptive WOT switch = 0
The commanded lambda is so lean that detonation would be probable at WOT and high engine speeds. After experimenting I found that the problem is that the PCM is too slow in making the transition from closed to open loop. Changing the Open Loop Delay Blending Ramp from the stock value of .0249 to .7 reduced the transition to an acceptable time of .4 seconds, but revealed another oddity. When the TP is greater than the value for open loop and less than the value for WOT, the commanded lambda matches the entries in the Base Fuel Table (BFT). However, when the TP exceeds the value for WOT, the commanded lambda is significantly richer than the value in the BFT.

I tried changing every available calibration constant remotely related to the error with no success. However, since I really have no need for WOT TP to be defined, I can set it to 775 and never reach it, since the actual maximum is 756. That way, whenever the TP exceeds the open loop TP, the commanded lambda will match the values in the BFT.

Unfortunately, setting the WOT TP to 775 didn't prevent the STFT (commanded lambda) from being richer than the value in the BFT. I finally determined that having the Switch to Force Open Loop being equal to the stock value of 1 was the cause of the problem. With it set to 0 the STFT when the PCM is in open loop matches the BFT value throughout the entire engine operating range.

Within a 20 minute drive radius of my house the maximum posted speed limit is only 50 mph, so I can legally only do first speed pulls to the engine rev limit of 6250 rpm (53.5 mph). After making some minor MTF changes the actual lambda is 1 to 2% richer than the commanded down to a lambda of .70 (10.2:1 AFR). I think the AFR is ready for some 3rd speed or 4th speed pulls on the dyno.
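For reference, the lambda-to-AFR conversions used throughout (e.g., .70 giving about 10.2:1) just multiply lambda by the stoichiometric ratio for pump gasoline, roughly 14.6:1:

```python
STOICH_GASOLINE = 14.64  # approximate stoichiometric AFR for pump gasoline

def lambda_to_afr(lam, stoich=STOICH_GASOLINE):
    """AFR = lambda times the stoichiometric ratio."""
    return lam * stoich

def afr_to_lambda(afr, stoich=STOICH_GASOLINE):
    """Lambda = AFR divided by the stoichiometric ratio."""
    return afr / stoich

print(round(lambda_to_afr(0.70), 1))  # 10.2, matching the figure above
```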


Tip in spark retard

Changing all entries to 0 in the Tip In Spark Retard Table for Change in AM (air mass) made no detectable improvement in tip in spark retard.
TipInSpark4.4.jpg

The knock sensor retard (red) steps down from 1 degree advance (485 secs) to 4.75 degrees retard (489 secs). According to the Borderline Knock Table, for a load of .90 and engine speed of 3685 rpm (490 secs) the spark should be 14.7 - 4.75 = 9.95 degrees instead of the 4.25 degrees observed.

There are two tables associated with spark retard for ECT:
SparkRetardECT.jpg

To determine the impact, the appropriate value in the first table is multiplied by the corresponding value in the second table. However, since the ECT was only 196 degrees, the value in the first table is 0, so the impact is 0.

There are also two tables associated with spark retard for air charge temperature (ACT) which is equivalent to intake air temperature (IAT):
SparkRetardACT.jpg

To determine the impact, the appropriate value in the first table is multiplied by the corresponding value in the second table. The ACT was 134 degrees, so the interpolated value from the first table is -52.5, which times .0898 equals -4.7 degrees: 9.95 - 4.7 = 5.25 degrees, which is 1 degree more than measured. If the first-table value is not interpolated and -64 is used instead, the result is -64 * .0898 = -5.75, and 9.95 - 5.75 = 4.2 degrees, approximately equal to the measured value. This indicates that the PCM did not interpolate ACT in the first table and used the next higher value instead.
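The arithmetic above can be reproduced directly. The numbers below are the ones quoted in the post; the two cases differ only in which first-table value feeds the multiplier:

```python
# Quoted values: base advance 9.95 deg (Borderline minus knock retard),
# ACT multiplier .0898, and two candidate first-table values for ACT = 134.
base_advance = 9.95
mult = 0.0898
interpolated = -52.5   # first-table value if the PCM interpolates ACT
next_higher = -64.0    # value at the next-higher ACT breakpoint

spark_interp = base_advance + interpolated * mult  # about 5.24 deg
spark_floor = base_advance + next_higher * mult    # about 4.20 deg
print(round(spark_interp, 2), round(spark_floor, 2))
# Measured spark was about 4.25 deg, matching the non-interpolated case.
```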

The purpose of the above spark retard tables is to prevent detonation. In the stock configuration the IAT sensor was integrated with the MAF sensor and located a considerable distance from the head intake ports. In my M90 configuration the IAT sensor is located in the intake manifold between the blower outlet and the head intake ports. Even though my knock sensor is very active (which I'll deal with soon) I have heard no sounds of detonation and my spark plugs show no indications. Therefore I feel comfortable in making the ACT spark retard tables less conservative.

I'm disappointed that the PCM apparently does not interpolate the ACT index in the table. So far the highest IAT I have logged with my unpressurized intercooler is 164 degrees, so I'll generate a new table with the best resolution near that temperature. First I modified the normalizers, except for the load index.
SparkRetardActNorms5.0.jpg

Then I modified the tables.
SparkRetardACT5.0.jpg

The ACT multiplier is as close to .1 as Advantage III would allow making it easy to perform the multiplication. The overall ACT retard ranges from 0 to 8 degrees.

I also changed the ECT spark retard tables setting the multiplier equal to .1 for simplicity.
SparkRetardECT5.0.jpg

The maximum ECT I recorded when establishing my normally aspirated baseline was 214 degrees after multiple dyno pulls on a pretty hot day.

I changed the Minimum Vehicle Speed for Tip In Torque Control from the stock value of 10 to the maximum of 127.
 






Knock sensor tuning

I increased the knock sensor maximum engine speed to retard timing from the stock 6000 rpm to the engine speed limiter setting of 6250 rpm.
The PCM correlates knock sensor activity with each cylinder in the firing order, and there is a knock sensor threshold calibration constant for each cylinder. The Advantage III description states that a smaller number in the table decreases the knock sensor sensitivity for that cylinder. According to a late-1990s Ford strategy source code listing:

"Knock occurs if [B * (threshold factor)]/256 > A

Where A = noise average & B = scaled knock window A/D reading
A threshold factor of 100 allows knock to be detected if the signal-to-noise is approximately 2.5 to 1. "

There is also the capability to change the overall sensitivity of the knock sensor that would affect all cylinders. According to the Ford strategy source code listing:

"Table Value    Sensor Gain
     0             4.00
     1             5.66
     2             8.00
     3            11.31
     4            16.00
     5            22.63
     6            32.00
     7            45.25"
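The quoted inequality can be checked numerically. With a threshold factor of 100, the window reading must exceed 256/100, or about 2.56 times, the noise average, matching the "approximately 2.5 to 1" note; halving a cylinder's threshold factor doubles the required ratio, which is consistent with a smaller number meaning less sensitivity:

```python
def knock_detected(noise_avg, window_reading, threshold_factor):
    """Quoted strategy formula: knock if [B * (threshold factor)] / 256 > A,
    where A is the noise average and B the scaled knock-window A/D reading."""
    return (window_reading * threshold_factor) / 256.0 > noise_avg

trip_ratio = 256 / 100  # S/N ratio needed to trip at a threshold factor of 100
print(trip_ratio)       # 2.56, i.e. roughly the quoted 2.5:1
print(256 / 50)         # 5.12: halving the factor demands twice the S/N
```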

I assume that when detonation actually occurs it will affect more than one cylinder and I want to retain the detonation detection capability. I'm hoping that only one (or maybe two) cylinders are generating a noise that the sensor is detecting. I will attempt to identify the noisy cylinder(s) by decreasing the values in the threshold table for one or two cylinders for each tune. I started with cylinder 0 in the firing order (cylinder 1) and decreased all of the values to half of the stock values.
KnockSensorThresCyl05.0.jpg

The only significant change I noticed was the reduction of knock sensor advance from +4.0 degrees to +0.5 degrees. This reminded me that I don't want the PCM advancing the spark based on the absence of detected knock, so I entered 0 in all positions of the Knock Sensor Advance Limit table. I then modified the threshold table for cylinder 1 in the firing order in the same manner as for cylinder 0.
There was no significant reduction in knock sensor retard after modifying the threshold table for cylinder 1, so I made the same modifications to the tables for cylinders 2 and 3 in the firing order. I was pleased that after changing those thresholds there was no knock sensor retard in the entire datalog, which included two short WOT bursts.
SparkRetard5.2.jpg

Red = knock sensor retard, Brown = TP Relative
I'll restore the original thresholds one cylinder at a time to isolate the noisy cylinder's knock sensor time window. However, there is still significant spark retard at tip in.
Restoring the original thresholds for cylinder 0 brought back knock sensor retard, so I know there are at least two noisy time windows. I'll reduce the stock thresholds by 25% for cylinder 0 to see what happens.
 






Altering engine displacement

Decreasing the engine displacement to 75% of stock resulted in severe spark retard at tip in.
SparkSource3WOT5.2.jpg

Spark Source = 3, Torque Control. The torque source for that instance was 5, Tip-in Shock Control. Adding an M90 supercharger makes the engine perform like a larger displacement engine so I'll try increasing the displacement to 125% of stock.

Increasing the engine displacement to 125% of stock resulted in significant improvement of tip in spark.
TipInSpark5.3.jpg

In two drives there were no instances of Tip-in Shock Control. However, there were two instances of torque reduction for Torque Shift Modulation, Transmission during an upshift after letting off the throttle, which seems appropriate.

Increasing engine displacement reduces computed load, since the MAF-measured air charge divided by the increased maximum possible air charge is a smaller number. Normally the maximum load for a normally aspirated engine will be less than one, and the maximum load for a forced induction engine will be greater than one. Computed load changes when the MAF Transfer Function is altered and when the Engine Displacement is altered. With my current MTF and 125% engine displacement I estimate my computed maximum load will be between .85 and .90, which means I may not have to significantly alter the stock load dependent tables. I've decided to use 125% engine displacement for now to reduce the amount of datalogging.
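The load relationship can be sketched as follows (the numbers are hypothetical; the formula follows the usual definition of normalized load as measured cylinder air charge over the maximum charge implied by the configured displacement):

```python
def computed_load(cyl_air_charge_lb, displacement_pct, stock_max_charge_lb):
    """Load = measured cylinder air charge / maximum possible charge at the
    configured displacement, so raising the displacement setting lowers the
    reported load. All numbers here are hypothetical."""
    max_charge = stock_max_charge_lb * (displacement_pct / 100.0)
    return cyl_air_charge_lb / max_charge

charge = 0.00110       # hypothetical measured charge per cylinder fill, lb
stock_max = 0.00100    # hypothetical max charge at stock displacement, lb
print(round(computed_load(charge, 100, stock_max), 2))  # 1.1 at stock
print(round(computed_load(charge, 125, stock_max), 2))  # 0.88 at 125%
```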
 






Load w/Failed MAF table

I started extracting the latest datalogged load values and associated engine speed and TP to compare with the stock and the revised Load w/Failed MAF Tables. With the engine displacement at 125% of stock the revised table load values (stock loads multiplied by 1.3) are significantly larger than those now computed by the PCM. The stock values are much closer to the PCM computed values for loads up to .65 so I'm reverting to the stock table. I'll replace the stock values with datalog values as they become available.
I have not yet been able to determine what impact (if any) the Load w/Failed MAF table has on tip in AFR.
 






Spark tuning - MBT

Now that the AFR is close to what I want and I have a way to control the knock sensor retard, I'm ready to start spark tuning. According to what I've read, the best engine performance is realized when the spark is set to achieve the maximum torque at all engine speeds. According to Wikipedia:

"Maximum Brake Torque (MBT) is the use of optimal ignition timing to take advantage of an internal combustion engine's maximum power and efficiency. There is always an optimal spark timing for all operating conditions of an engine. MBT is most ideal at wide-open throttle (WOT), but not desirable when the engine is at idle. Although MBT is desired at WOT, it is wise to retard timing slightly to prevent knocking that may occur and to create a small safety margin. It is possible to calculate the MBT of an engine by taking into account all of the operating conditions of an engine through its sensors. Operating conditions are defined by these engine parameters: lambda, engine load, internal exhaust gas recirculation, engine speed, and spark advance."

There are complex computations (well beyond my ability) to determine MBT spark advance for a particular engine configuration. There are also ways to instrument an engine (again beyond my ability) and measure the MBT directly. A simpler (but practical) method is to strap a vehicle to a dynamometer and measure torque at various engine speeds as the spark advance is changed. Since I have to pay by the hour for dyno time, I simply used the stock MBT table (with some minor modifications), hoping that Ford had performed the needed testing to generate it. I changed the spark values so all of the load plots converge to the same advance at minimum engine speed.
MBTSparkTable6.0.jpg

I also made a few changes in a couple of the plots to smooth the curves.
MBTSparkGraph6.0.jpg


For my PCM strategy the MBT values are not used by the PCM to control engine timing but are used for torque calculations that impact transmission characteristics.
I did not alter the stock values for the Spark Modifier for MBT Based on A/F Ratio table.
 






Max Allowed Spark Table

One of the spark tables that the PCM actually utilizes to control spark is the Max Allowed Spark Table. Since theoretically MBT spark provides the best engine performance I decided to copy the MBT values into the Max Allowed Spark Table.
MaxSparkTable6.0.jpg

Unfortunately, the MBT table has a resolution of .25 degrees while the Max Allowed table only has a resolution of .5 degrees. When I could not use the exact same value I rounded up (e.g., 21.25 becomes 21.5 deg).
MaxSparkGraph6.0.jpg

The load plots show the smoothness deterioration due to the coarser resolution.
The Wikipedia comment about MBT spark not being desirable at idle slightly concerns me, but there are special idle calibration constants that may avoid any problems.
The Spark Adder to Max for A/F when Open Loop table allows the PCM to increase or decrease spark for various combinations of engine speed and lambda. For now I'm setting all entries to 0.
I did not alter the stock values for the Max Spark to Limit Combustion Pressure table.
I did not alter the stock values for the Max Spark when at Low Load table.
 






Borderline Knock Table

Another spark table that the PCM utilizes to control spark is the Borderline Knock Table. If there were no possibility of detonation this table would be almost identical to the Max Allowed Spark Table. With fairly high compression, boost, and possibly high intake air temperatures, detonation is a real possibility even with 93 octane gasoline. As a starting point I copied the MBT values into the Borderline Knock table, which only has .5 degree resolution. This time, however, I rounded down when I couldn't use the exact value (e.g., 22.75 becomes 22.5 deg), because my goal is to have the spark controlled exclusively by the Borderline Knock table during cruise and acceleration. I have confirmed through datalogging of test spark tables that my PCM strategy always utilizes the lower spark advance when comparing the Borderline and Max Allowed spark tables. Some tuners set the Max Allowed Spark values all to 50 or 60, but I like having realistic values in case of an error in the Borderline Knock table.
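The two rounding rules (up for the Max Allowed copy, down for the Borderline copy) amount to quantizing each MBT value to the coarser .5 degree resolution:

```python
import math

def quantize(value, step, mode):
    """Snap value to a table's resolution: 'up' for the Max Allowed copy,
    'down' for the Borderline Knock copy."""
    n = value / step
    return (math.ceil(n) if mode == "up" else math.floor(n)) * step

print(quantize(21.25, 0.5, "up"))    # 21.5, as in the Max Allowed table
print(quantize(22.75, 0.5, "down"))  # 22.5, as in the Borderline table
```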
Next I reduced the spark advance proportionally for high loads and high engine speeds to reduce the possibility of detonation.
BorderlineSparkTable6.0.jpg

BorderlineSparkGraph6.0.jpg

I did not alter the stock values for the Spark Adder for A/F when Open Loop since all entries were 0.
I did not alter the stock values for the Spark Adder After Load Increase since all entries were 0.
The Spark Retard ACT Multiplier and the Spark Retard for ACT tables were changed as described in post 26.
The Spark Retard ECT Multiplier and the Spark Retard for ECT tables were changed as described in post 26.

A considerable amount of dyno time can be expended attempting to increase the spark advance as close to MBT as possible while avoiding detonation. Since my Sport is only for street use I don't plan to optimize my spark timing.


Performance Shift Schedule

As soon as I started to alter the upshift points I realized I still had a need to define a usable WOT, so I reduced the WOT TP values from 775 to 700. The upshift values are:

Upshift                   Stock    Revised
Trans WOT Shift RPM 12    5500     5850
Trans WOT Shift RPM 23    5500     5800
Trans WOT Shift RPM 34    5800     5900
Trans WOT Shift RPM 45    5800     5850

The 3 to 4 shift has the highest engine speed because of the wider spacing between 3rd and 4th speed.
Gears640.jpg

rear axle 3.73:1
1st speed 2.47:1
2nd speed 1.86:1 (2.47 * .75)
3rd speed 1.47:1
4th speed 1.00:1
5th speed 0.75:1

According to my NA dyno testing power drops rapidly above 6000 rpm.
NAHPBase2640.jpg

I assume the same holds with forced induction, so I left the rpm limiter at the stock setting of 6250. I may need to quicken the shifts to prevent the engine speed from hitting the rev limiter during upshifts.
 






FI dyno results - shift points revised

Using the torque results of my first forced induction dyno test session and an algorithm I created, I revised my WOT shift points as follows:

Upshift                   Stock    Revised
Trans WOT Shift RPM 12    5500     6000
Trans WOT Shift RPM 23    5500     5900
Trans WOT Shift RPM 34    5800     6200
Trans WOT Shift RPM 45    5800     6000

This morning I logged a WOT 1st to 2nd shift and found that the peak engine speed during the shift was only 5952 rpm. I will try to determine why.
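For illustration, one common WOT shift-point criterion (not necessarily the algorithm I used) is to shift where wheel torque in the next gear, at the post-shift engine speed, catches up to wheel torque in the current gear. A sketch with a made-up torque curve and the 1st/2nd gear ratios from above:

```python
def wheel_torque(engine_tq, gear_ratio):
    return engine_tq * gear_ratio

def best_shift_rpm(torque_at, ratio_cur, ratio_next, rpm_max=6250, step=50):
    # Scan upward; shift at the first rpm where the next gear makes at
    # least as much wheel torque as the current gear.
    for rpm in range(3000, rpm_max + 1, step):
        rpm_after = rpm * ratio_next / ratio_cur  # engine speed after the shift
        if wheel_torque(torque_at(rpm_after), ratio_next) >= \
                wheel_torque(torque_at(rpm), ratio_cur):
            return rpm
    return rpm_max  # curves never cross: hold to the limiter

def torque_at(rpm):
    # Made-up torque curve (lb-ft) peaking near 4000 rpm, illustration only.
    return 250 - 0.00002 * (rpm - 4000) ** 2

print(best_shift_rpm(torque_at, ratio_cur=2.47, ratio_next=1.86))
```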
 






axle scalars

I changed the axle scalar Final Drive Ratio from the stock 3.77 to my actual 3.73. My BFG P235/75R15 Long Trail Radial T/A Tour tires have a diameter of 28.9 inches. I changed the axle scalar Tire Revs Per Mile from the stock 800 to my actual 721.
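For a sanity check, the purely geometric revs-per-mile from the unloaded 28.9 in. diameter comes out lower than the published 721, which is expected: published figures are measured on a loaded tire, whose effective rolling radius is smaller.

```python
import math

def revs_per_mile_geometric(diameter_in):
    """Geometric revolutions per mile from the unloaded tire diameter."""
    return 63360 / (math.pi * diameter_in)  # 63,360 inches in a mile

print(round(revs_per_mile_geometric(28.9)))  # about 698 vs the 721 spec
```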
 






FI dyno results - MTF load impact

I noticed during a WOT pull in 4th on the dyno that the engine faltered at 4900 rpm, so I immediately terminated the pull. The datalog showed that the spark was too advanced. The reason was that I had changed the MAF transfer function at high airflows to correct the commanded vs. actual lambda, which reduced the calculated load, and the spark advance in the Borderline Spark table is greater at lower loads than at max load. So I reduced the engine displacement from 125% back to the stock value, which now results in a max load of about .9. This may adversely impact tip-in spark retard but will definitely impact the Load w/Failed MAF table.

Edit: I realized that the actual reason the engine faltered was because I had forgotten to disable the vehicle speed limiter (about 103 mph). To disable the vehicle speed limiter I changed the Speed Limit Min Torque Ratio from 0 to 1.
SpeedLimiter.jpg
 






tip-in blend

It was taking too long for the PCM to change lambda when switching from closed to open loop at throttle tip-in.
TipInSTFT5.0.jpg

In the above instance the time exceeded 3 seconds. I learned that the scalar Open Loop Delay Blending Ramp determines the length of the transition, so I changed the stock value of .025 to 1.00, which reduced the transition time to about .2 seconds.
TipInBlend6.4.jpg

Green represents the open loop flag.
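One way to picture why a small ramp value takes seconds: if each background update moves the commanded value a fixed fraction (the ramp) toward the open-loop target, the settling time scales with the number of updates needed. The update interval below is a guess, and the PCM's actual blend math isn't documented here; this is only a sketch of the trend.

```python
def blend_time(ramp, settle_frac=0.95, update_s=0.05):
    """Updates (and seconds, at an assumed interval) until the closed-to-open
    loop blend is settle_frac complete, for a fixed-fraction-per-update model."""
    remaining, updates = 1.0, 0
    while remaining > 1.0 - settle_frac:
        remaining *= (1.0 - ramp)
        updates += 1
    return updates, updates * update_s

print(blend_time(0.025))  # stock-like ramp: on the order of a hundred updates
print(blend_time(1.00))   # ramp of 1.00: done in a single update
```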
 





