All Posts

My free 60-day trial expired, and I have since updated the license to a free trial, but now I'm unable to use the Splunk search head. Each time I get this: "Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK." I have already updated the license and restarted the application. Please help me with this.
Hello @gcusello  Based on your query, it returns the values from the lookup, and once the values are returned the index search uses them to find the events. What I am looking for is only to check whether the event field values for severity, location and vehicle are present inside the lookup field values. Anyway, I appreciate your idea. Thank you.
haha, so you are the same one, awesome!
Hi @skramp, if you are talking about this post, I am the post's author; I thought I would post the question in the Splunk community to get support with it. Below is one of the CSVs I have tried to import:

ServiceTitle;ServiceDescription;DependentServices;
Splunk;;SHC | IND;
SHC;;Server1;
IND;;server2;
server1;;;
server2;;;

In the 1st environment it gives "File preview: 0 total lines"; in the 2nd environment it gets imported successfully, displaying all the rows.
I have an alert that I am trying to throttle based on a few fields from the alert, on the condition that if it triggers once, it should not trigger again for the next 3 days unless it has different results. But the alert is running every 15 minutes and I can see the same results from the alert every 15 minutes. My alert outputs its results to another index, for example:

blah blah , ,  , , ,  , , ,  | collect index=testindex sourcetype=testsourcetype

Based on my research, I came across a post which says that since a pipe command is still part of the search, throttling has no effect, because the search hasn't completed yet and can't be throttled. I think this is because the front end says "After an alert is triggered, subsequent alerts will not be triggered until after the throttle period", but that doesn't say "they aren't run". Is that the case? If so, how can I stop writing the duplicate values into my index?
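One way to keep duplicates out of the summary index regardless of whether throttling fires is to filter against what was already collected before the final collect. A minimal sketch, assuming a hypothetical base search and a hypothetical key field host that identifies a result; only the index and sourcetype names are taken from the post:

index=main sourcetype=myapp error
| stats count BY host
| eval source_marker="new"
| append
    [ search index=testindex sourcetype=testsourcetype earliest=-3d@d
      | eval source_marker="summary"
      | fields host source_marker ]
``` keep only keys that are not already present in the summary for the last 3 days ```
| eventstats values(source_marker) AS seen BY host
| where source_marker="new" AND mvcount(seen)=1
| fields - source_marker seen
| collect index=testindex sourcetype=testsourcetype

The trade-off is an extra read of the summary index on every run, but it makes the alert idempotent even if throttling only suppresses the notification and not the scheduled search itself.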
I am in the middle of a Splunk migration. One of the tasks is to move data from some sourcetypes onto the new servers using the | collect index=aws sourcetype=* command. The numbers added up after running checks, but when I ran the same checks again a day later the numbers no longer matched:

August
Source 1 - Old Splunk: 12,478,853 / New Splunk: 12,478,853
Source 2 - Old Splunk: 26,171,911 / New Splunk: 26,171,911

24 hours later
Source 1 - Old Splunk: 12,478,853 / New Splunk: 12,477,696
Source 2 - Old Splunk: 26,171,911 / New Splunk: 3,001,183

I've set the following stanza in the indexes.conf file on the deployment server, and the index only contains 22 GB of data. Can you help?

[aws]
coldPath = $SPLUNK_DB\$_index_name\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB\$_index_name\db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB\$_index_name\thaweddb
frozenTimePeriodInSecs = 94608000
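If it helps with re-checking, a quick way to compare counts on both environments over the same fixed window is tstats; a minimal sketch, where only the index name comes from the post and the time window is illustrative:

| tstats count AS event_count where index=aws earliest=-30d@d latest=@d BY sourcetype source
| sort - event_count

Running this with identical earliest/latest on the old and new environments and diffing the output should narrow the mismatch down to specific sourcetypes or sources before looking at retention or the collect behaviour.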
Hi @Rhidian,
you have to move only data indexed on that Indexer and not replicated data. You can distinguish them by the folder name:
locally indexed data have folder names that start with db_
replicated data have folder names that start with rb_
In this way, you can create your own script. Obviously only folders in the cold folder of each index.
Ciao.
Giuseppe
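As a starting point for such a script, dbinspect can list the cold buckets per indexer, so you can check which bucket directories are original (db_) copies and which are replicated (rb_) copies; a minimal sketch against a hypothetical index name:

| dbinspect index=my_index
| search state=cold
| table splunk_server index bucketId state path
| sort splunk_server path

The path field shows the bucket directory on disk, so its db_ or rb_ prefix tells you which copies the script should touch.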
@deepakc Already gone through this, but like I said, I need examples of how the SVC figure is actually calculated.
This blog should help with some SVC information   https://www.splunk.com/en_us/blog/platform/what-is-splunk-virtual-compute-svc.html?locale=en_us
Does anyone have an example of a coldToFrozenScript to be deployed in a clustered environment? I'm wary of having duplicate buckets, etc.
Hi @iamtheclient20,
if you can define a rule also for the other fields in the lookup (e.g. the relevant word is the first one in the field value), you could apply the regex approach to the other fields as well, e.g.:

index=test
    [ | inputlookup testLookup.csv
      | rex field=severity "^(?<severity>\w+)"
      | rex field=location "^(?<location>\w+)"
      | rex field=vehicle "^(?<vehicle>\w+)"
      | fields severity location vehicle ]
| table severity location vehicle

Otherwise, it isn't possible.
Let me know if I can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking.
Giuseppe
P.S.: Karma Points are appreciated
Hi again, it's not only the introspection_generator_addon app that is the reason for the memory leak; it's a general bug. This is the information we got from Splunk support last week:

Engineering have found an edge condition in the instrumentation source code where a process handle was opened and queried for information but never closed, which leads to an open handle on the process (in your case it is more often python, BUT it could be any process under the splunkd subprocess tree). As a result of the open handle in the introspection code, when the process in question terminates, the OS will not release all the resources of that terminating process because its reference count is not 0. It is a bit of a race condition depending on the state of the process.

In our environment we have also disabled the apps python_upgrade_readiness_app, splunk_assist and splunk_secure_gateway because they also start sub-processes which leave behind zombie processes (and by the way we observed high CPU utilization by python3.exe processes started by the splunk_assist app).

Another workaround mentioned by Splunk is to stop the splunkd.exe process with an extremely high handle count. If Splunk instrumentation is needed on the system, the problem can be avoided by restarting the Splunk instrumentation; this can be achieved from PowerShell using the command below, scheduled with the Windows Task Scheduler:

Get-Process | Where-Object {$_.ProcessName -eq 'splunkd'} | Where-Object {$_.HandleCount -GE 5000} | Stop-Process

Here is the output of a run on a test server:

PS C:\Windows\system32> Get-Process | Where-Object {$_.ProcessName -eq 'splunkd'}

Handles  NPM(K)   PM(K)   WS(K)   CPU(s)      Id      SI  ProcessName
-------  ------   -----   -----   ------      --      --  -----------
    424      51  347948  193552   15.913,17   2448     0  splunkd
  77846      23   57020   58536      855,06   7680     0  splunkd

PS C:\Windows\system32> Get-Process | Where-Object {$_.ProcessName -eq 'splunkd'} | Where-Object {$_.HandleCount -GE 5000} | Stop-Process

Confirm
Are you sure you want to perform the Stop-Process operation on the following item: splunkd(7680)?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"):

PS C:\Windows\system32> Get-Process | Where-Object {$_.ProcessName -eq 'splunkd'}

Handles  NPM(K)   PM(K)   WS(K)   CPU(s)      Id      SI  ProcessName
-------  ------   -----   -----   ------      --      --  -----------
    424      51  345888  193540   15.913,70   2448     0  splunkd
    309      23   36524   38020        0,58 125536     0  splunkd

Yes, this works, but we don't want to roll it out to all our servers.
Hi @gcusello, thank you for the ideas, but this does not meet my requirements, because I also need to compare the event field values of location and vehicle as well. If the event field values for severity, location and vehicle are present inside the lookup field values for severity, location and vehicle, then all 3 are tagged as true. Thank you
Maybe | getservice can also help.
Hi @isoscow, I am doing this regularly: I create a new event with a correlation search which is added to my episode. In this event there are new fields with the values I want to send to my ticketing system, and the Action Rule in my NEAP reacts to these fields. Here is also the .conf talk Peter Zumbrink and I gave this year at .conf24 where we explain how we do this: https://conf.splunk.com/watch/conf-online.html?search=OBS1137C#/
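Purely for illustration, a minimal sketch of what the search part of such a correlation search could look like; the index, threshold, field names and values are all hypothetical, and the search would then be saved as an ITSI correlation search whose results are added to the episode so the NEAP action rule can key off the extra fields:

index=app_logs sourcetype=payment_errors earliest=-15m
| stats count AS error_count BY host service
| where error_count > 50
| eval ticket_system="snow", ticket_priority="P2", assignment_group="payments-oncall", ticket_short_description="High error rate on ".service." (".host.")"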
There is a project on GitHub for bulk editing called itsi_toolbox; maybe this can help. But be careful, it is not an official tool supported by Splunk, and you can destroy your whole ITSI environment very easily and quickly!
Hi @iamtheclient20,
in general you can use a subsearch with the inputlookup command, but in your specific case it isn't applicable, because your lookup has a more detailed value ("high octane") while the event has only "high"; the opposite would be possible, but not this use case. The only workaround could be to use a derived field built from the severity field in the lookup, something like this:

index=test
    [ | inputlookup testLookup.csv
      | rex field=severity "^(?<severity>\w+)"
      | fields severity ]
| table severity location vehicle

but I don't know if this meets your requirements.
Ciao.
Giuseppe
index=test | table severity location vehicle

severity    location    vehicle
high        Pluto       Bike

testLookup.csv:

severity       location                     vehicle
high octane    Pluto is one of the planet   Bike has 2 wheels

As you can see, my table contains events. Is there a way to compare my table's events to my testLookup.csv field values without using the lookup command or the join command? For example, if my event's severity value matches or appears as a word inside the lookup's severity field value (e.g. "high" inside "high octane"), then it is true, otherwise false. Thank you.
A few days ago I saw someone in the Slack usergroup who also had a problem with ITSI 4.18.1, but only on one of his environments. Can you explain your error in a little more detail: do you get an error message, or does Splunk only tell you there is no content in your CSV? Can you please also try to import your CSV on a different Splunk environment, ideally one on a different version, just to check?
Hello, we have decided to retire Splunk and the server that Splunk was running on. If the server is decommissioned, do we still need to decommission Splunk, or would one equal the other? If it wouldn't, is there a way to still decommission Splunk after the server has been decommissioned? Thank you.