The default number-of-events options displayed in Show source are 25, 50, 100, 200, 500, and 1000.
Can I change this so that I can see all the events in my source file? There are nearly 7,000 events in it. I tried changing the static options defined in the show_source view, but nothing changed. Can somebody help me?
You can edit the show_source view's XML to add higher row counts such as 10,000, and in some cases this will result in 10,000 or more rows being displayed. Technically it displays up to 10,000 rows before the selected event and 10,000 rows after, for a total of 20,001 events.
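For reference, the static options in Advanced XML views typically live in a Count-style module's `options` param. The exact module and param names in your version's show_source.xml may differ, so treat this fragment as a sketch of the shape of the edit, not a drop-in patch:

```xml
<!-- Hypothetical fragment: check your own show_source.xml for the real
     module name. Each <list> entry adds one choice to the dropdown. -->
<param name="options">
  <list><param name="text">1000</param><param name="value">1000</param></list>
  <list><param name="text">10000</param><param name="value">10000</param></list>
</param>
```

Remember to copy the view into your app's local directory rather than editing the default file, and restart or reload the view after the change.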
However, I do not think this works for all distributions of events in time. I can reproduce cases where it does render 20,001 events, but I can also find plenty of cases where it gives up before reaching those numbers. Above roughly 1,000 rows, Show source will not necessarily go and fetch all 10,000 rows the user asked for: if it finds the request difficult to fulfill, it may conclude that there are no more events to get when in fact there are.
Long version: Show source in the Splunk Search app is implemented using an obscure, undocumented argument to the REST API. You can look into ShowSource.py, or just read the splunkd_access log to see the requests that get made back to splunkd to fulfill show-source requests. A surrounding=1 argument gets passed, even though no such argument appears in the official REST API docs.
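If you want to spot these requests yourself, you can scan the access log for the parameter. A minimal sketch (the log line format here is simplified; real splunkd_access.log lines also carry timestamps, client IPs, and status codes):

```python
import re

def find_surrounding_requests(log_lines):
    """Return (line, surrounding_value) pairs for requests that pass
    a 'surrounding' argument back to splunkd."""
    pattern = re.compile(r"surrounding=(\d+)")
    hits = []
    for line in log_lines:
        m = pattern.search(line)
        if m:
            hits.append((line.strip(), int(m.group(1))))
    return hits

sample = [
    "GET /services/search/jobs/123/events?offset=90&surrounding=1 HTTP/1.1",
    "GET /services/search/jobs/123/events?offset=0&count=100 HTTP/1.1",
]
print(find_surrounding_requests(sample))
```

Running this against your real splunkd_access log (filtered to the time you clicked Show source) should surface the same kind of request.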
Note the surrounding=1 argument in the splunkd_access log snippet posted above. surrounding=1 tells the API that instead of returning the events of the search result itself, it should go to offset=90 of the current results, fetch that one event, read its source and host field values, and then run a separate search in the index for other events with that source and host that are nearby in time. Starting from that point and walking backward in time to collect events is a piece of cake for Splunk, but walking forward, in a historical sense, is harder. And I think in this implementation, if it finds itself struggling, it allows itself to give up before it has actually reached the 10,000 mark, or whatever limit you have set.
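To make the described behavior concrete, here is a self-contained Python sketch of that logic. All names here (`Event`, `show_source_window`, the `window` cutoff) are hypothetical illustrations of the steps described above, not Splunk's actual code:

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float   # epoch seconds
    source: str
    host: str
    raw: str

def show_source_window(results, index, offset, max_rows=10000, window=3600):
    """Sketch of the 'surrounding' lookup described above.

    1. Take the single event at `offset` in the current search results.
    2. Read its source and host field values.
    3. Search the index for other events with the same source and host,
       keeping up to `max_rows` before and `max_rows` after the anchor
       that fall within `window` seconds of it.
    """
    anchor = results[offset]
    same_stream = [e for e in index
                   if e.source == anchor.source and e.host == anchor.host]
    same_stream.sort(key=lambda e: e.time)
    i = same_stream.index(anchor)
    # Walking backward in time is the easy direction; walking forward is
    # where the real implementation apparently may give up early.
    before = same_stream[max(0, i - max_rows):i]
    after = same_stream[i + 1:i + 1 + max_rows]
    return [e for e in before + [anchor] + after
            if abs(e.time - anchor.time) <= window]
```

With max_rows=10000 in both directions plus the anchor itself, a fully satisfied request yields the 20,001-event total mentioned earlier.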