Why is there an initialization phase at the start of the simulation, in which the battery is completely discharged and then recharged?
HOMER uses economics to operate the dispatchable power sources. In your system, you have two dispatchable sources: the battery bank and the grid. The price of grid power is simple: $0.10/kWh. The price of battery power is a little more complicated: it's the sum of the battery wear cost (the cost per kWh of throughput resulting from the shortening of the battery lifetime by cycling energy through the battery bank) and the battery energy cost (the cost of the energy you put in the battery bank). In other words, HOMER accounts for the cost of the energy in the batteries, and the fact that the more you use the batteries the faster you kill them.
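The dispatch choice described above can be sketched as a simple cost comparison. This is an illustrative sketch, not HOMER's actual code; the function name and constants are hypothetical, with the prices taken from this example system.

```python
# Hypothetical sketch of HOMER-style economic dispatch for one time step.
# Constants are from the example system discussed above.

GRID_PRICE = 0.10          # $/kWh, price of grid power
BATTERY_WEAR_COST = 0.025  # $/kWh of throughput (battery lifetime cost)

def cheapest_source(battery_energy_cost):
    """Pick the cheaper dispatchable source for serving the next kWh.

    battery_energy_cost: running average $/kWh of the energy already
    stored in the battery bank (zero if it was all free PV power).
    """
    battery_power_cost = BATTERY_WEAR_COST + battery_energy_cost
    return "battery" if battery_power_cost < GRID_PRICE else "grid"
```

With an energy cost of $0.12/kWh the battery totals $0.145/kWh and loses to the grid; with an energy cost of zero it totals only $0.025/kWh and wins.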
Your battery wear cost is $0.025/kWh -- you can see the formula for that in the help file if you look up "battery wear cost" in the index. HOMER calculates that before the simulation starts, but it can't do the same with the battery energy cost; it calculates that as the simulation proceeds. At the start of the simulation the battery energy cost is zero. As the simulation proceeds, HOMER tracks the total amount of energy you have put into the battery bank and the total cost of that energy. (It doesn't cost anything to put PV power into the battery bank, but it costs something to put grid power into the battery bank.) Dividing the total cost by the total energy gives the average battery energy cost. You can plot the battery energy cost when you are looking at hourly data.
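The running-average bookkeeping can be sketched like this. Again, the class and method names are made up for illustration; only the idea (total cost divided by total energy, zero before any charging) comes from the description above.

```python
# Sketch of the running-average battery energy cost described above.
# This is an illustration, not HOMER's internal implementation.

class BatteryEnergyCost:
    def __init__(self):
        self.total_energy_in = 0.0  # kWh put into the battery bank so far
        self.total_cost_in = 0.0    # $ paid for that energy

    def charge(self, kwh, price_per_kwh):
        """Record a charging event: PV power has price 0, grid power $0.10/kWh."""
        self.total_energy_in += kwh
        self.total_cost_in += kwh * price_per_kwh

    @property
    def average(self):
        """Average $/kWh of the stored energy (zero before any charging)."""
        if self.total_energy_in == 0:
            return 0.0
        return self.total_cost_in / self.total_energy_in
```

For example, charging 10 kWh of free PV power and 10 kWh of grid power at $0.10/kWh gives an average battery energy cost of $0.05/kWh.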
In your system, the battery energy cost settles to about $0.12/kWh, so the total cost of battery power is about $0.145/kWh. Since that's more than the grid, HOMER always prefers the grid. But in those first few hours of the year, before you have put any energy into the battery, HOMER assumes the battery energy cost is zero, making battery power cheaper than the grid. So it drains the batteries, then charges them with grid power, then realizes that battery power is more expensive than grid power, so it never uses the batteries again.
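The start-of-year artifact can be reproduced with a toy hour-by-hour loop. Everything here is an illustrative assumption: the battery size, the 1 kWh/h load, and in particular the 85% charging efficiency, which is one plausible way grid power at $0.10/kWh could end up as roughly $0.12/kWh of stored energy (the original text does not give the efficiency).

```python
# Toy hour-by-hour sketch of the start-of-simulation behavior described
# above. All numbers are illustrative assumptions, not HOMER outputs.

GRID_PRICE = 0.10          # $/kWh
WEAR_COST = 0.025          # $/kWh of battery throughput
CHARGE_EFFICIENCY = 0.85   # assumed; not stated in the original text

def simulate(hours, load_kwh=1.0, battery_kwh=5.0):
    stored = battery_kwh          # batteries start full
    energy_in = cost_in = 0.0     # running totals for the energy cost
    choices = []
    for _ in range(hours):
        energy_cost = cost_in / energy_in if energy_in else 0.0
        if WEAR_COST + energy_cost < GRID_PRICE and stored >= load_kwh:
            choices.append("battery")   # battery looks cheaper: drain it
            stored -= load_kwh
        else:
            choices.append("grid")      # serve the load from the grid
            if stored < battery_kwh:    # and recharge the battery bank
                kwh_stored = min(load_kwh, battery_kwh - stored)
                stored += kwh_stored
                energy_in += kwh_stored
                # grid kWh purchased exceeds kWh stored due to losses
                cost_in += (kwh_stored / CHARGE_EFFICIENCY) * GRID_PRICE
    return choices
```

Running `simulate(10)` shows the pattern described above: the first five hours drain the full battery (its energy cost is still zero), the grid then recharges it, and once the tracked energy cost rises to about $0.12/kWh the battery is never chosen again.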