All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I removed "(actor.displayName)" from the first "count" command and it works now.
Thank you for the quick reply. It does not seem to run with an individual category either. Should we consider the "update health checks" option, which takes me to the Splunk Health Assistant Add-on, which is archived?
Hi all, I am trying to install Splunk Security Essentials into a single instance of Splunk via the GUI, using a downloaded copy of the app. The documentation does not list any pre-install steps. Any suggestions would be welcome, thanks.
Splunk 9.3.1
Splunk Security Essentials 3.8.0
Error: There was an error processing the upload. Error during app install: failed to extract app from /tmp/tmp6xz06m51 to /opt/splunk/var/run/splunk/bundle_tmp/7364272378fc0528: No such file or directory
I'm trying to create an alert. The alert's query ends with "| stats values(*) as * by actor.displayName | stats count(actor.displayName)". I want to add the clause "| where count > 5" at the end of the query. To verify that the query would work, I changed it to "| where count < 5", but I'm getting no results.
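Not speaking for the original poster, but one common cause is that stats count(actor.displayName) produces a field literally named count(actor.displayName), not count, so a later "where count > 5" has nothing to compare against. A minimal sketch of one way around it (field names are taken from the question above; the rest is an assumption about the intent):
| stats values(*) as * by actor.displayName
| stats count(actor.displayName) as count
| where count > 5
Renaming the aggregate with "as count" (or simply using "stats count") gives the where clause a field it can actually see.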
Does it run at all if you select an individual category instead of all categories? If it runs with fewer categories, it could be an issue related to the load on the DMC from running all categories at once. We had a similar kind of bug in the earlier 8.2 versions, but I don't see it on v9 on my side.
Probably because you gave an incomplete description of the events you are working with, the fields that have already been extracted, the SPL you used to get those results, and what your expected result should look like. We can only work with what you provide. We do not have access to your environment, so we are guessing most of the time.
I am trying to run the health check on the DMC. The health check dashboard loads fine from checklist.conf in the default and local directories. Our Splunk version is 9.3.0. After clicking the Start button it gets stuck at 0%. Can anyone tell me what the issue could be?
Hello! Here is the document which explains using inputs. Please expand the code and look here: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/DashStudio/inputs
"inputs": {
    "input_global_trp": {
        "type": "input.timerange",
        "options": {
            "token": "global_time",
            "defaultValue": "-24h@h,now"
        },
        "title": "Global Time Range"
    },
This is the link for linking to a report: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/DashStudio/linkURL#Link_to_a_report
If none of these help you out, please try creating your dashboard in Classic and converting it to Studio; you might be able to spot the difference. Please upvote if this helps.
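For reference, a complete minimal version of that same stanza (the input name, token, and title are just the ones from the documentation excerpt above) would look something like this, with the closing braces the excerpt cuts off:
{
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  }
}
Other visualizations in the dashboard can then reference $global_time$ as their time range token.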
Hi @gcusello  Thank you for replying to this post. I am also interested in detecting SQL injection. However, the first two links seem to be outdated and no longer refer to the SQL injection topic.
https://answers.splunk.com/app/questions/1528.html
http://blogs.splunk.com/2013/05/15/sql-injection/
Could you please update them?
Below is how an inputs.conf would look:
[http://adt]
disabled = false
sourcetype = adt:audit
token = 977xx0B5-E5xx-4xx1-A894-B5DA75XX3A31
indexes = adt_audit
index = adt_audit
For creating a token you can use a token generator tool like this: https://www.uuidgenerator.net/ - Each token has a unique value, which is a 128-bit number represented as a 32-character globally unique identifier (GUID). Agents and clients use a token to authenticate their connections to HEC. Another way is to create one from the web UI via Data Inputs > HTTP; it gets saved in etc/apps/splunk_http_input/local/inputs.conf, so you can look there to see what it looks like.
Hope this helps; please upvote or mark as solved if this solution is helpful.
Hi @sverdhan , Go in [Settings > Licensing > License Usage > Previous 60 days > Split by Sourcetype] and you'll have your search, which will be:
index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by st fixedrange=false
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
Ciao.
Giuseppe
Hi @PickleRick  I accepted your previous suggestion using eventstats as the solution. Thank you for your help. Thanks also for bringing to my attention that eventstats has a memory limitation, another limits.conf setting that only an admin can modify. The default setting is max_mem_usage_mb=200MB. How do I know whether my dataset will ever hit 200 or not? Does streamstats have a limitation? How do I use streamstats in my case? When I changed eventstats to streamstats, it shows incremental data instead of the total, so I need to figure out how to filter that out, if possible. Thanks
Streamstats result:
dc    ip         location    name
1     1.1.1.1    location-1  name0
2     1.1.1.1    location-1  name1
1     1.1.1.2    location-2  name2
1     1.1.1.3    location-3  name0
2     1.1.1.3    location-3  name3
1     1.1.1.4    location-4  name4
2     1.1.1.4    location-4  name4b
1     1.1.1.5    location-0  name0
1     1.1.1.6    location-0  name0
0     1.1.1.7    location-7
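Not sure it fits the exact data, but as a rough sketch (field names taken from the table above, and assuming the goal is only the final distinct count per ip rather than the running one), one option is to keep just the last streamstats value per group:
| streamstats dc(name) as dc by ip
| sort 0 ip -dc
| dedup ip
| table dc ip location name
That keeps one row per ip with the highest running count, which should line up with what eventstats dc(name) by ip returns; worth testing on a small slice of events first.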
Hi @sbhatnagar88 , no, it's correct: in Linux you can tar and untar the full Splunk home directory. Just remember to mount all the partitions at the same mount points as the original; if you cannot, you have to modify the $SPLUNK_DB parameter in splunk-launch.conf. Ciao. Giuseppe
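For example (the path here is only illustrative, adjust it to wherever the index data actually lives on the new host), the relevant line in $SPLUNK_HOME/etc/splunk-launch.conf would look like:
# Point Splunk at the directory that holds the index buckets on the new mount
SPLUNK_DB=/new/mount/point/splunk/var/lib/splunk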
Getting this result; I don't see the tag name in the list:
_time                       NULL
2024-10-02 15:45:39.507     2
2024-10-02 15:45:39.508     1
2024-10-02 15:45:39.516     1
2024-10-02 15:46:14.196     6
2024-10-02 15:46:14.199     3
Something like this:
index=B
| stats count by Reporting_Host
| search NOT [| inputlookup inventory.csv ]
| table Hostname
inventory.csv has the table picked up from index A. The lookup query is:
index=B | stats values(IP address) by Hostname Operating_system
PS: This query is not working, just a thought.
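As a sketch only (it assumes the lookup column is called Hostname while the events from index B carry Reporting_Host, as described in this thread), the rename inside the subsearch is usually what makes the NOT line up with the event field:
index=B
| stats count by Reporting_Host
| search NOT [| inputlookup inventory.csv | rename Hostname as Reporting_Host | fields Reporting_Host]
The subsearch returns a list of Reporting_Host=value terms, so the NOT keeps only the hosts from index B that are missing from inventory.csv.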
| spath c7n:MatchedFilters{} output=MatchedFilters
| foreach mode=multivalue MatchedFilters [| eval MatchedFilters=trim(MatchedFilters,"tag:")]
| chart count over _time by MatchedFilters useother=f
  In Splunk, hot buckets are where incoming data is actively written and indexed. These buckets hold the most recent data and are immediately searchable. Once a hot bucket reaches its size or time limit, it transitions into a warm bucket. Warm buckets store data that is no longer being written to but remains searchable. ------ If you find this solution helpful, please consider accepting it and awarding karma points !!
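If it helps to see where those limits live, here is a hedged example of the relevant indexes.conf settings (the index name, paths, and values are made up for illustration; the attribute names are the standard ones):
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Roll a hot bucket to warm once it reaches ~750 MB (auto) or spans 90 days of data
maxDataSize    = auto
maxHotSpanSecs = 7776000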
This is the query I have figured out from the awesome Splunk community:
index=my-index "kubernetes.namespace_name"="namus" "cluster_id":"*stage*" "Env":"stg" "loggerName":"com.x.x.x.SomeClass" "My simple query for key=" "log.level"=INFO
| spath output=x log.message
| rex max_match=0 field=x "(?<key>\w+)=(?<value>\w+)"
| eval z=mvzip(key, value, "~")
| mvexpand z
| rex field=z "(?<key>[^~]+)~(?<value>.*)"
| table key value
| eval dummy=""
| xyseries dummy key value
| fields - dummy
It produces some output, but I am missing a lot of data. Can someone show how to list all the rows found? What am I missing here?
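I may be misreading the goal, but one reason rows collapse here is that xyseries groups by the dummy field, which is the same empty string for every event, so everything folds into a single row. A rough sketch (keeping the same base search and the field names from the query above) that yields one row per original event would number the events before expanding them:
| spath output=x log.message
| streamstats count as row
| rex max_match=0 field=x "(?<key>\w+)=(?<value>\w+)"
| eval z=mvzip(key, value, "~")
| mvexpand z
| rex field=z "(?<key>[^~]+)~(?<value>.*)"
| xyseries row key value
| fields - row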
Hi @yuanliu  Unfortunately none of the below queries are working for me. The first one is crashing Splunk, so I am unable to test it. With the second one, I don't get any results. It could be because the field "Reporting_Host" is present only in index B, and since we are excluding index B in the next step, the results are 0. However, I tried renaming the Hostname field in index A and running the query, but still no results. Can we test this scenario using a lookup table? That might improve the search performance. Can you give me something in this regard?
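Purely as a sketch of the lookup-based approach (the names follow the earlier posts: Hostname, IP_address, and Operating_system from index A, Reporting_Host from index B, and inventory.csv as an uploaded lookup table file; all of these are assumptions to adjust), it could be split into two searches. First, populate the lookup from index A, scheduled or ad hoc:
index=A
| stats values(IP_address) as IP_address by Hostname Operating_system
| outputlookup inventory.csv
Then compare index B against it:
index=B
| stats count by Reporting_Host
| lookup inventory.csv Hostname as Reporting_Host OUTPUT Hostname as in_inventory
| where isnull(in_inventory)
Because the comparison happens through the lookup rather than a subsearch over index A's events, it avoids subsearch result limits and tends to be cheaper to run.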
This has been officially registered as a bug. No ETA on a fix.