h/t Nick, I have iterated on your idea. It stripped the trailing zeros nicely but kept the dot on values like "6556.000", so I added \d:
| rex field=alert_value "^(?<myfield>[\s\S]*\.\d[\s\S]*?)0*$"
In my case, the field also contains integers:
| rex field=alert_value "^(?<keep>[^\.]+)(?<keepdot>\.{0,1})(?<keepdotdecimal>\d*?)0*$"
| eval human_value = keep . if(len(keepdotdecimal)!=0, "." . keepdotdecimal, "")
This caters for "6556" and "6,556" as well.
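A quick way to test this (using makeresults with a made-up sample value, just for illustration):
| makeresults
| eval alert_value="6556.250000"
| rex field=alert_value "^(?<keep>[^\.]+)(?<keepdot>\.{0,1})(?<keepdotdecimal>\d*?)0*$"
| eval human_value = keep . if(len(keepdotdecimal)!=0, "." . keepdotdecimal, "")
This should produce human_value="6556.25".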
I've read the accepted answer, and it doesn't satisfy my question. The question is best answered by the Splunk technical team, with insight into why 'delete' was built to hide/mask data rather than actually delete it and free up space. The 'delete' command is inaccurately and poorly named.
You could write a macro that does the reporting and then invoke it along with your selection criteria. For example,
imagine that your macro is named av_summary and contains something like this:
sourcetype=av* plus other search terms
| cool transformations here
| stats count by virus sublocation location country
| other cool reporting or charting
You could invoke the macro like this in the search bar:
region="Europe" `av_summary`
You could even save a search for each region. But since the underlying macro would be shared, you would have only one place to update the actual report.
It's easy to create a macro: just go to Manager >> Advanced Search >> Macros.
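For instance, assuming the hypothetical report sketched above, the definition you save there would end up in macros.conf looking roughly like this:
[av_summary]
definition = sourcetype=av* | stats count by virus, sublocation, location, country
Any search that invokes `av_summary` then picks up changes to this one definition automatically.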
Not really, because the deployment server only handles apps. It is not a general replication mechanism.
This docs section covers failover and other high availability techniques pretty well: http://docs.splunk.com/Documentation/Splunk/latest/Installation/Highavailabilityreferencearchitecture
Ayn, I combined this with your other answer
http://splunk-base.splunk.com/answer_link/41401/
about getting readable times, to get
| eval indextime=strftime(_indextime,"%+")
Thanks for both excellent answers!
You can perform a search and then pipe to delete (| delete). That will remove the matching data from being searchable. In your case, you could specify a certain source and host to target an individual log file. It does not, however, reclaim disk space; the only way to do that is to clean the index. Note that no role has access to delete by default, and as a best practice you should not grant that access permanently. So you'll want to grant access temporarily and remove it afterward.
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Delete
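For example, targeting a single log file might look like this (hypothetical host and source values):
source="/var/log/myapp.log" host=myhost | delete
If you do need the disk space back, the only option is cleaning the index from the CLI, e.g. splunk clean eventdata -index main, which wipes that entire index, so use it with care.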
Look here at a previous answer that addresses this:
http://splunk-base.splunk.com/answers/29551/too-many-search-jobs-found-in-the-dispatch-directory
There are (at least) two ways of dealing with this. If you want to change the raw data within the event as it is being indexed, then SEDCMD is the route to take, as cvajs suggested. It would look something like this:
[mysourcetype]
SEDCMD-date=s/\d{2}(\d{2})-(\d{2})-(\d{2})/\2\/\3\/\1/
(Assuming I got my sed syntax 100% correct)
Your strftime + strptime approach should work as well. It obviously does not change the data in the index, but it should update the field correctly. However, I think you have your format strings wrong:
... | eval last_updated_date=strftime(strptime(last_updated_date,"%Y-%m-%d"),"%m/%d/%y")
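A quick sanity check of that eval, with a made-up sample value:
| makeresults
| eval last_updated_date="2012-11-05"
| eval last_updated_date=strftime(strptime(last_updated_date,"%Y-%m-%d"),"%m/%d/%y")
which should yield 11/05/12.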
efelder0,
What do you mean by "date"? Date as in "MM/DD/YYYY", or date as in "MM/DD/YYYY HH:MM:SS (AM|PM)"?
I would opt to use [\d]{2}/[\d]{2}/[\d]{4} to grab MM/DD/YYYY, just in case the space comes up missing (not likely, but you never know).
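For example, a rex along those lines (the field name event_date is just an illustration):
... | rex "(?<event_date>[\d]{2}/[\d]{2}/[\d]{4})"
rex works against _raw by default, so this pulls the first MM/DD/YYYY it finds into event_date.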
I think you want to use eval here. Something like
... | eval Total_Threat_Count=Critical_Severity + Medium_Severity + Low_Severity | table host, Total_Threat_Count
should work.
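One caveat, assuming some hosts might be missing one of the three fields: the sum comes out null for those events. A sketch that guards against that with coalesce:
... | eval Total_Threat_Count=coalesce(Critical_Severity,0) + coalesce(Medium_Severity,0) + coalesce(Low_Severity,0) | table host, Total_Threat_Count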
It doesn't change the regex statement; it changes the rest of your search. After you've extracted both fields, if you want them formatted the way you described, simply do something like:
<yoursearch> | eval last_updated_dt=last_updated_date+" "+last_updated_time
That will give you a field with the formatting you're looking for, while keeping the date and time separated for flexibility, as FunPolice suggested.
This will work:
<your search> | eval Severity_string = case(Severity == 1, "Low", Severity == 2, "Medium", Severity == 3, "High", Severity == 4, "Critical")
Then just use Severity_string in your table or whatever you use to display the results.
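One note, in case Severity can hold other values: case() returns null when nothing matches, so you may want a catch-all arm (my addition, not part of the original question):
<your search> | eval Severity_string = case(Severity == 1, "Low", Severity == 2, "Medium", Severity == 3, "High", Severity == 4, "Critical", true(), "Unknown")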
The link above is an old, deprecated one. This is the new link to the Support phone numbers:
http://www.splunk.com/en_us/about-us/contact.html#tabs/customer-support
Pretty sure there is no way to do this when the outputcsv command generates the CSV file from your search results. I would recommend using a script to clean up the CSV file after it's been generated; sed would be useful for that, or batch/cmd/PowerShell on Windows systems.
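For example, if the cleanup you need were simply dropping the header row that outputcsv writes (an assumption, since the exact cleanup wasn't specified), a sed one-liner would do it:
sed -i '1d' myFile.csv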
This can be done using a scheduled search, in conjunction with altering your search to filter on the index time (_indextime) rather than the event time, to create the CSV file you are looking for.
So you will want to use something like...
sourcetype="Symantec" | eval myTime=now()-60 | where _indextime>myTime | outputcsv myFile.csv
To break down the important pieces of this search: the | eval myTime=now()-60 portion uses eval to create a field containing the epoch timestamp of when the search runs, minus 60 seconds. We then use the | where _indextime>myTime statement to get all the events from the last minute by comparing each event's index time against the myTime value. If you run the scheduled search every minute, each run will only contain events indexed within the last minute. This should prevent most cases of duplicated events in your CSV file.
Depending on how often you run the scheduled search, you will want to change the myTime offset to match the schedule's interval. In doing so, remember that all the values are epoch time, in seconds.
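As an aside, newer Splunk versions also accept index-time modifiers directly in the search string, which achieves the same filtering without the eval/where pair (assuming your version supports _index_earliest):
sourcetype="Symantec" _index_earliest=-1m | outputcsv myFile.csv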