Reader Comments (25)

Posted: Mar 16th 2010 5:44PM (Unverified) said

  • 2 hearts
  • Report
There are no tricky implementation constraints here; we just want to make it easy for residents managing scripted objects. If you set up 10 scripts that each use a maximum of 64K, they will still be running in a week. If you set up 10 scripts that each use some memory dynamically, they might be running in a week, or one may have used up all the available memory and stopped the others from working.

The LSL runtime is not always the best option for resource usage. Simple Mono scripts will be able to reserve less than 16KB and so be more attractive than LSL scripts to residents. Complex Mono scripts are able to use 64KB where they need to. The same scripts written in LSL would have to be split across 2 or more scripts communicating via link messages which would, themselves, add to the memory usage. There will be some scripts that just happen to need around 16KB of memory that may be more efficient to script in LSL due to its smaller overheads for lists and recursion, but these will be in the minority.
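
(For readers who haven't used the technique Babbage mentions, here is a minimal sketch of two LSL scripts in the same prim sharing work via link messages. The message number 42 and the strings are arbitrary illustrations, not anything Babbage specified.)

// Script A -- one script in the linkset; it hands work to its companion.
default
{
    touch_start(integer total_number)
    {
        // 42 is an arbitrary message number; both scripts just have to agree on it.
        llMessageLinked(LINK_THIS, 42, "do_work", llDetectedKey(0));
    }
}

// Script B -- a separate script in the same prim, with its own memory budget.
default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        if (num == 42 && msg == "do_work")
        {
            // The heavy list handling would live here, inside this script's own allocation.
            llOwnerSay("Handled work for " + (string)id);
        }
    }
}

Note that the link messages themselves consume a little memory in each script, which is the overhead Babbage refers to.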

Posted: Mar 16th 2010 10:43PM (Unverified) said

  • 2 hearts
  • Report
We assumed "tricky implementation constraints" were what was meant when it was said that it just wasn't feasible to do, and that we'd have to settle for a settable preallocation scheme instead.
Reply

Posted: Mar 16th 2010 11:00PM (Unverified) said

  • 2 hearts
  • Report
Also, I see that you're implying that scripts compiled for the LSL runtime engine won't be getting the new allocation facility, and will have their memory allocation locked at 16K.

(You know, it would have been far more efficient to get this information as part of our queries to the Lab about all of this last week. Someone should probably think about that.)
Reply

Posted: Mar 16th 2010 11:35PM (Unverified) said

  • 2 hearts
  • Report
It's nice, the way the Lab communicates so freely with people trying to get answers from them...

Yes, that was sarcasm. I used to be a fan of LL; even when SL was still struggling a bit with the tech at the time, I put up with it. However, they are quickly losing transparency and are starting to sound more and more like they've just given up trying to talk to the very community that they started. Perhaps if you explained the issues with a bit more clarity, members of that same community might actually be able to offer suggestions on how to correct tech issues.

Of course, that's too easy, right?
Reply

Posted: Mar 16th 2010 7:17PM (Unverified) said

  • 2 hearts
  • Report
Babbage, are you saying there is a limit of just 10 Mono scripts per parcel of a certain size, or 10 per avatar? I hope not. We need tools to evaluate our needs against your arbitrary decisions about what we are paying for, so we can decide whether or not to continue paying you $300 a month for a simulator on a web host. My sim runs fine right now. If you cripple my sim, then decisions will have to be made about how best to proceed with the damages.

It is time for Linden Research Inc. to explain what these limits will be.

Posted: Mar 17th 2010 4:13AM (Unverified) said

  • 2 hearts
  • Report
Hang on... it seems that several things are being conflated here.

First, there's the issue of memory allocated for data for a given script. If script writers, as they should, go to the trouble of analyzing their memory usage and call the new function, there shouldn't be an advantage in that regard to compiling to LSL bytecode versus CIL (the bytecode for C#, which Mono's C# compiler generates; since you mention C# as a future choice, I take it LL has written an LSL front end that generates CIL). Yes, for existing scripts that don't use that function, there's the potential of wasted space (though 64K is four times 16K, not eight).

Second, there's the issue of the quality of code generated by the front end. You say "Given two identical scripts, the one compiled for Mono uses more memory in code than the one compiled for LSL." I would say that means that the LSL-to-CIL front end needs work--and I'd expect that C# compilers have had a lot of work put into optimization, so that once C# is an option, the balance may shift.

Finally, there's the issue of time as well as space. I'm sure CIL wins in that race, though in practice, I would expect that scripts spend most of their time waiting for library calls to return--and if Gwyneth Llewelyn's "No more limits!" blog post is correct, LL deliberately slows some of them, which wipes out some of that advantage.

Posted: Mar 17th 2010 2:02AM (Unverified) said

  • 2 hearts
  • Report
The 64k limit for Mono scripts usually means you end up with more than 16k worth of data storage available. Unless you are making godawful complex scripts, Mono comes as a welcome relief both performance- and storage-wise.

You also have to figure in that Mono scripts are only allocated memory as needed, whereas LSL wastes 16+k right off the bat. They upped the memory limit for Mono scripts because it isn't _always_ used; given a thousand scripts in Mono and a thousand in LSL bytecode, you would likely use less memory with Mono. Also, byte size != performance; Mono is much, much faster for most tasks.

Posted: Mar 18th 2010 10:51PM (Unverified) said

  • 2 hearts
  • Report
Whether they are using it or not, Mono scripts currently always take about 64k each (minus the shared bytecode).
Reply

Posted: Mar 17th 2010 4:13AM (Unverified) said

  • 2 hearts
  • Report
Babbage: "There will be some scripts that just happen to need around 16KB of memory that may be more efficient to script in LSL due to its smaller overheads for lists and recursion, but these will be in the minority."

Actually I suspect that there will be more than a few scripts that will be better off using LSL. When Mono first went to the beta grid I was excited to try to combine my primary product's two scripts into a single Mono script. Even after removing all of the communications code I was unable to get the combined script to compile within the 64k limit. After doing a number of tests on various scripts of various sizes, I found that the Mono scripts were taking about 2.4 to 2.5 times the memory for the same code. Based on this ratio, any LSL script that currently uses more than about 6k to 7k of memory will end up with a smaller memory footprint by staying in LSL.
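
(For anyone who wants to reproduce that kind of comparison, a rough sketch follows: llGetFreeMemory reports a script's free heap, so compiling the same script once for LSL and once for Mono and comparing the numbers gives an approximate footprint. The figures jump around with garbage collection under Mono, so treat them as rough.)

// Rough footprint check: compile this once for LSL and once for Mono,
// touch the prim, and compare the reported free heap in each case.
default
{
    touch_start(integer total_number)
    {
        // Free heap remaining after code, globals and stack are accounted for.
        llOwnerSay("Free memory: " + (string)llGetFreeMemory() + " bytes");
    }
}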

I waited for over a year after Mono was released to the main grid to do my conversions. In some of the script limit discussions at the time, this issue was covered, and the statement was made that, to keep LSL and Mono on an even playing field, LSL scripts were going to be charged '64k' as far as memory limits were concerned. Based on this I went ahead and spent a month converting all of my products and inventory and sending out updates to thousands of customers.

Now it appears that I will be forced to convert back.

sakkaku: The dynamic memory is not being used, or is not going to be displayed, as far as the memory limits are concerned. The current plan seems to be to create a hard memory limit per script. Mono apparently will, at some time in the future, be able to reduce its memory usage, but that will only help with small scripts. Speed is fine, but how many scripts really run constantly doing major work that needs speed, and how many are like vendors, which just sit quietly waiting for a click, then run 20 or 30 instructions, then just wait again?

The real issue will be how this memory usage limit affects customers. My efficient Mono scripts will use 128k, or 80k to 90k once the limits are in place. Or I can recompile them back to LSL, where the two will take a total of 32k. Now place yourself in the position of a customer who has to fit these objects into their parcel memory limits. Which one will you buy: the Mono version at 128k (or hopefully 80k), or the LSL version at 32k? They both do the same thing, they both have the exact same features, and they both run close enough to the same speed that you cannot tell the difference.

There are some features that mono was supposed to have such as sharing code memory so that multiple copies of the same script would share memory for more efficiency, but if that exists it won't be visible to the end user.

Performance, efficiency, speed, and all of the bells and whistles that Mono supposedly has sound nice, but they don't mean a thing when it comes to the final issue of memory footprint. Many of the features of Mono have not been implemented, or if they have, are hidden in the background and are not available for us to evaluate how they really affect our scripts. In the end, the same thing that drove mega prims will drive reverting to LSL: the ability to get more in the same space.

Posted: Mar 17th 2010 6:16AM (Unverified) said

  • 2 hearts
  • Report
Tateru, yes, the LSL runtime won't be able to change its reserved memory size: the VM relies on LSL scripts being a single 16K block.

Posted: Mar 17th 2010 6:17AM (Unverified) said

  • 2 hearts
  • Report
There are lots more details about the 2 VMs and our plans for the future, including script limits, in my FOSDEM talk, which is online here: http://www.youtube.com/watch?v=QGneU76KuSY

Posted: Mar 17th 2010 12:53PM (Unverified) said

  • 2 hearts
  • Report
This is why SL is awesome.

Posted: Mar 17th 2010 10:22PM (Unverified) said

  • 2 hearts
  • Report
I'm surprised Babbage didn't explain that Mono will, in the future, allow scripts to specify that they use less than the full 64k. This was revealed in Kelly Linden's recent blog post (comments section):
https://blogs.secondlife.com/community/technology/blog/2010/03/05/server-138-beta-now-open

Important section as follows: "Right now there is no way to change how much memory a mono script uses, and it is true that at any given point it probably uses less than 64k, by some amount. However, before we enforce script limits, which again is still a ways off, we will enable the ability to set a lower max memory size for mono scripts. If your script really only uses 4k, congrats you can set it to that and it will only count as 4k, and you won't be able to use more until you change it. This will be implemented before we enforce limits. By doing it this way the scripter can be in control. By making scripts request the memory size change the simulator can deny that request and let the script deal in its own way with there not being enough memory - all without us having to randomly decide on which objects to return."

So this is all a non-issue. If your script uses 4k, you can mark it as using 4k; it will display as using 4k, and it will only be able to use 4k. Pretty simple.
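
(The function that will do this had not shipped when these comments were written. Purely as a sketch, assuming it ends up with an LSL-style name along the lines of llSetMemoryLimit and returns success or failure as Kelly describes, usage would look roughly like this:)

// Sketch only: the limit-setting call is not yet available; llSetMemoryLimit
// is an assumed name following LSL conventions.
default
{
    state_entry()
    {
        // Ask the simulator to cap this script at 4K. A failed request would
        // mean the script has to cope with whatever memory it already has.
        if (!llSetMemoryLimit(4096))
        {
            llOwnerSay("Memory limit request was denied.");
        }
    }
}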

Posted: Mar 17th 2010 10:54PM (Unverified) said

  • 2 hearts
  • Report
The issue here is not as simple as just specifying 4k. That will work fine for some scripts, but only for scripts that are currently consuming less than about 6k in LSL to start with. I base my numbers on Mono scripts averaging about 250 percent of the memory of their LSL equivalents. I double-checked those figures today. I went back to the mailing list from when Mono was in beta and found where Babbage even stated that some scripts can increase 3.5x or more. At that time I found that one of mine went up 5.8x.

Assume you have a script that consumes 4k in LSL; it will show as 16k in the viewer when you look at it. When you recompile that one in Mono, it will take about 10k (again assuming 2.5x). You can then (eventually) set the limit down to about 10k and everything is great.

Now, try that with a script that consumes about 8k in LSL. That 16k image size used in LSL will then become 24k in Mono.

In my door scripts, that will become a minimum size in Mono of about 80k after they implement the memory size setting, and 128k before that is added. However, if I recompile them back to LSL, they will only take 32k.

I agree that Mono is much better to use. I am looking forward to C#, and I understand how much better this will be for the server side, but place yourself in the shoes of a landowner. You have X amount of memory to work with along with your prim limits. Which would you buy: 32k or 80k for the exact same thing?
Reply

Posted: Mar 17th 2010 11:18PM (Unverified) said

  • 2 hearts
  • Report
"I'm surprised Babbage didn't explain that Mono will allow for specifying scripts use less than the full 64k in the coming future."

He didn't have to. It was one of the key points we mentioned in the article, though at the time we were of the understanding that both the Mono runtime and the LSL runtime would have this facility. If I read Babbage correctly, however, only the Mono runtime will get this facility.
Reply

Posted: Mar 18th 2010 11:05PM (Unverified) said

  • 2 hearts
  • Report
BTW, I remember being told a couple of times that Mono scripts had to use more memory because they store characters as UTF-16. What if we had an option to have a script's strings stored as UTF-8 or even ASCII?
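
(Background on why the encoding matters: UTF-16 costs roughly two bytes per character even for plain ASCII text, where the LSL runtime stores roughly one byte per character. A quick in-world way to see the difference is sketched here; exact figures will vary with garbage collection and string handling.)

// Sketch: see roughly what a 1000-character ASCII string costs on the heap.
// Under Mono (UTF-16) expect roughly 2 bytes per character plus overhead;
// under the LSL runtime, roughly 1 byte per character.
default
{
    state_entry()
    {
        integer before = llGetFreeMemory();
        string s = "";
        integer i;
        for (i = 0; i < 1000; ++i)
        {
            s += "x";
        }
        integer after = llGetFreeMemory();
        llOwnerSay("1000 characters cost about " + (string)(before - after) + " bytes.");
    }
}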

Posted: Mar 19th 2010 5:36PM (Unverified) said

  • 2 hearts
  • Report
"He didn't have to. It was one of the key points we mentioned in the article"

Funny, even now, I still don't see it in the article, anywhere. :) At least, not in a very clear, concise, and easy to understand form. The closest I see is this:

"That preallocated amount will be able to be adjusted (though still no more than 16K for LSL or 64K for Mono) with a new function, where the script author can determine how much of that memory the script will use, and set the preallocation amount to just that. As usual, the script will crash if the preallocated amount is exceeded."

which is as clear as mud, and certainly not bulleted as a key point. But I'm probably being picky. Curse of the technically minded.

Some very good points Innes. Certainly adds some food for thought.

Posted: Mar 21st 2010 8:02PM (Unverified) said

  • 2 hearts
  • Report
After reading this and the comments, I have a few questions. First, if script limits are going to be put in place, what will happen to role play regions that use many scripts to make environments lively? (e.g. the creepy atmosphere of Doomed Ship)

Second, I've seen truly phenomenal objects and systems in Second Life that use lots of scripts; will these limits essentially put them out of business, ultimately hurting the economy of Second Life?

Third, some furry avatars happen to use quite a few scripts; will the limit be so severe as to actually stop these avatars from functioning properly?

It's already bad enough that soon I won't be able to afford to keep my stuff on XStreet; don't limit my Second Life experience any further with these limits! D:

Posted: Mar 21st 2010 11:45PM (Unverified) said

  • 2 hearts
  • Report
Well, groups of scripts running within the memory limits would be unaffected, as we understand it. Those that operate outside the limits would fail. Exactly where those limits are doesn't seem to have been determined yet -- that still seems to be a work in progress.

So on the face of it, it appears there won't be *too* much change, except that the limit will be slightly lower than it is now, and you'll know what it is (Right now, you have to guess what it is).
Reply

Posted: Mar 21st 2010 11:37PM (Unverified) said

  • 2 hearts
  • Report
What I don't quite understand is how memory strictures alone will help performance. I mean, yeah, memory usage probably has an impact on the server. But surely abusing timers, loops, heavy math, or just plain old sloppy scripting impacts performance a lot more, even with the script scheduling. What they should really have available is a code profiler: one that views ALL scripts in a linkset as part of a "project" and can offer guidance to scripters. Too many scripts in there when you could tell a child prim what to do from the root? Tell people. An infinite loop that never exits? Should probably flash a red light.
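
(Part of this did eventually appear in a limited form: Mono scripts gained a way to profile their own peak memory via llScriptProfiler and llGetSPMaxMemory, which shipped after these comments were written. A sketch follows; it covers memory only, not the timer and loop abuse described above.)

// Sketch: per-script memory profiling for Mono scripts.
default
{
    touch_start(integer total_number)
    {
        llScriptProfiler(PROFILE_SCRIPT_MEMORY);   // start recording peak usage
        // ... run the code path you want to measure here ...
        llScriptProfiler(PROFILE_NONE);            // stop recording
        llOwnerSay("Peak memory during the profiled run: "
            + (string)llGetSPMaxMemory() + " bytes");
    }
}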
