All Posts


Hi, I am adding a new action "ingest excel" to the existing SOAR app CSV Import. Two dependencies need to be installed for this action: pandas and openpyxl. However, after adding the dependencies in the App Wizard, the action still fails with:

ModuleNotFoundError: No module named 'pandas'

I found that in the app JSON my dependencies are only added to "pip_dependencies", but not to "pip39_dependencies". Is that the reason why the dependencies are not installed? Please advise. Thank you.
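For reference, this is roughly what I expect the app JSON to need, assuming "pip39_dependencies" mirrors the structure of the existing "pip_dependencies" block (the wheel paths below are placeholders, not the real file names):

"pip_dependencies": {
    "wheel": [
        {"module": "pandas", "input_file": "wheels/shared/pandas.whl"},
        {"module": "openpyxl", "input_file": "wheels/shared/openpyxl.whl"}
    ]
},
"pip39_dependencies": {
    "wheel": [
        {"module": "pandas", "input_file": "wheels/py39/pandas.whl"},
        {"module": "openpyxl", "input_file": "wheels/py39/openpyxl.whl"}
    ]
}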
@bowesmana Sorry for the delay in responding, I was on vacation. Thanks for sharing the cluster command. I tried it, but it is not giving me the required result, or I am not using it correctly. I shared only one part of the requirement. The actual requirement is to compare two days' logs (today and yesterday) coming from different apps and trigger an alert whenever there is a new error. There is no specific error pattern or field to identify errors; we need to look for the keywords "Error/Fail/Timeout" in the logs. I am trying to identify similar phrases in the error logs, store the unique error text in a lookup file, and then match it against the next day's data to identify new error logs.

Query:
index="a" OR index="b" (ERROR OR TIMEOUT OR FAIL OR EXCEPTION)
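For context, the rough approach I have in mind looks like this (a sketch only; the lookup name known_errors.csv is a placeholder):

Baseline search, run once per day over yesterday's data:

index="a" OR index="b" (ERROR OR TIMEOUT OR FAIL OR EXCEPTION) earliest=-1d@d latest=@d
| eval error_text=_raw
| dedup error_text
| table error_text
| outputlookup append=true known_errors.csv

Detection search, run over today's data, keeping only errors not yet in the lookup:

index="a" OR index="b" (ERROR OR TIMEOUT OR FAIL OR EXCEPTION) earliest=@d
| eval error_text=_raw
| dedup error_text
| lookup known_errors.csv error_text OUTPUT error_text AS seen_before
| where isnull(seen_before)

Matching on the raw text treats near-identical messages as different errors, which is why I was hoping to normalize them with something like cluster first.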
@oO0NeoN0Oo I think the JavaScript Fetch API is not directly available in Splunk dashboard JS. But you can try this JS code, which is working fine for me:

var settings = {
    "url": "https://localhost:8088/services/collector/event",
    "method": "POST",
    "timeout": 0,
    "headers": {
        // replace hec_token with your own HEC token
        "Authorization": "Splunk hec_token",
        "Content-Type": "application/json"
    },
    "data": JSON.stringify({
        "sourcetype": "my_sample_data",
        "event": "this is my data!"
    })
};

$.ajax(settings).done(function (response) {
    console.log(response);
});

As you are sending events from the browser, you should go through the link below as well:
https://www.splunk.com/en_us/blog/tips-and-tricks/http-event-collector-and-sending-from-the-browser.html

I hope this will help you.
Thanks
KV
An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.
Hello, I have started my journey into more admin activities. I was attempting to add a URL (comment) under "Next Steps" in a notable event, but it is grayed out. I have given my user all the relevant privileges, so that doesn't seem to be the issue. I also tried to edit this by going to Configure > Content Management and editing the search (alert) from there, but the notable action is grayed out without the option to edit, showing only the comment "this alert action does not require any user configuration". I realize it is easier to edit that part for correlation searches, but I am attempting to edit alerts, not correlation searches.
Renaming authorize.conf to authorize.conf.old in system/local helped in my case.
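For reference, the rename itself is just the following (a sketch assuming a default $SPLUNK_HOME; restart afterwards so the change takes effect):

mv $SPLUNK_HOME/etc/system/local/authorize.conf $SPLUNK_HOME/etc/system/local/authorize.conf.old
$SPLUNK_HOME/bin/splunk restart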
Hi @N_K,
In a nutshell, I would use an SSH action to create a unique temp folder locally on SOAR, then use the SSH "put file" action to read your files from the vault and write them to this folder one by one. When all files are in the folder, run an SSH command to archive them, and finally upload the archive to Jira directly, or send it to the vault and then on to Jira. Once the Jira action has completed, you can remove the temp folder, which removes the local files and saves space. You can also remove the files from the vault at this point. Have you tried this logic?
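The archive step over SSH could be as simple as this (a sketch only; the temp folder path and archive name are hypothetical):

tar -czf /tmp/case_1234.tar.gz -C /tmp/case_1234 .

The resulting archive is then what you attach to Jira or push back into the vault.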
Do you get the CLICKED message in the console log, or any other message? I assume you've looked at the dashboard examples for setting tokens on buttons, as your code is similar. If you add logging to the base code, does anything get logged at all?
Hi @silverKi,
The maxDataSize for your hot buckets is 1 MB. Your friend's setting appears to be higher (5 MB). To add to what's already been written, you're writing (compressed) data at different rates:
Friend: ~720 bytes per second
You: ~19 bytes per second
This will influence the size of the warm bucket after it rolls from hot when either maxDataSize (1 MB in your case) or the default maxHotSpanSecs value of 90 days has been exceeded. Hot buckets can also roll to warm when Splunk is restarted or when triggered manually. That probably isn't happening here, but it's worth noting.
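For reference, both settings live in indexes.conf on a per-index basis. A minimal sketch, assuming a hypothetical index named your_index (the values shown are defaults/illustrations, not recommendations):

[your_index]
# size at which a hot bucket rolls to warm; "auto" corresponds to roughly 750 MB
maxDataSize = auto
# maximum age of a hot bucket before it rolls to warm; 7776000 seconds = 90 days
maxHotSpanSecs = 7776000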
I suppose SHOULD_LINEMERGE=true is that way for some historical, now unknown reason, and nobody has been brave enough to change its default to false.
My shot in the dark would be that you're trying to use fetch() to push events to server A from a webpage served by server B, and you're hitting CORS problems.
As you can check here, https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Commandsbytype, this command also moves processing from the indexers to the SH side. And as @PickleRick said, this command uses a lot of memory too.
You have very little data in your buckets. And comparing bucket sizes from two different environments with different data (especially when there's so little of it) makes no sense. Normally you'd expect buckets of several dozen or even hundreds of megabytes.
Adding to what's already been said - there is very rarely a legitimate use case for SHOULD_LINEMERGE. Relying on Splunk recognizing something as a date to break the data stream into events is not a very good idea. You should set a proper LINE_BREAKER instead.
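A minimal props.conf sketch of that approach, assuming a hypothetical sourcetype whose events each start with an ISO-style timestamp (adjust the regex and time settings to your actual data):

[my_custom_sourcetype]
SHOULD_LINEMERGE = false
# break before any line that starts with a date like 2025-01-20
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19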
It depends on what the desired outcome looks like. Since stats produces aggregated results, you have to ask yourself what it is you really want. If you just want to add some aggregated value to each result row - that's what eventstats is for (be careful with it though, because it can be memory-hungry). If you want to get aggregated field values, you can use values() or list() as additional aggregation functions.
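A small sketch of both options, using hypothetical field names:

| eventstats avg(duration) AS avg_duration BY host

adds the per-host average to every result row without collapsing them, while

| stats count AS events values(status) AS statuses BY host

keeps one aggregated row per host but collects the distinct status values alongside the count.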
And you really think this is a common use case? I use Windows at home in a VM with GPU passthrough, but I wouldn't say that's something used on a typical desktop.
So I have got it working 99%. I did something like this:

index=xxxxxx "Starting iteration" OR "Stopping iteration"
| stats earliest(_time) as Start, latest(_time) as Stopped
| eval Taken=(Stopped-Start)/60
| eval Time_Taken=if(Taken>15,"Not Good","Good")
| where Time_Taken="Not Good"
| table Start Stopped Time_Taken

Now it shows Not Good if over 15 mins. The issue is how to set the alert up properly: if I set it to check every 15 mins, it may overlap two starts. Example: a run started at 7pm and finished at 7.08pm; the alert checks at 7.25pm for the last 15 mins, so it sees the 7.08pm Stopped and then a 7.15pm Start that maybe finished at 7.24pm. If that makes sense to you gurus.
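One idea I'm considering to pair each start with its own stop, so overlapping runs don't get mixed together (a sketch reusing the index and message strings from above; transaction can be slow on large data sets):

index=xxxxxx "Starting iteration" OR "Stopping iteration"
| transaction startswith="Starting iteration" endswith="Stopping iteration"
| eval Time_Taken=if(duration/60>15,"Not Good","Good")
| where Time_Taken="Not Good"
| table _time duration Time_Taken

Running the alert over a slightly longer window than the schedule (for example every 15 minutes over the last 30 minutes) should also reduce the chance of a run being split across two alert windows.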
Hi @Vignesh,
There is no documented REST API, but the SA-ThreatIntelligence app exposes the alerts/suppressions service to create, read, and update (including disable and enable) suppressions. To delete suppressions, use the saved/eventtypes/{name} endpoint (see https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTknowledge#saved.2Feventtypes.2F.7Bname.7D).

Search, Start Time, and End Time are joined to create SPL stored as an event type named notable_suppression-{name}, e.g.:

`get_notable_index` _time>1737349200 _time<1737522000

Description and status are stored as separate properties. You can confirm this in $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/eventtypes.conf:

[notable_suppression-foo]
description = bar
disabled = true
search = `get_notable_index` _time>1737349200 _time<1737522000

Add -d output_mode=json to any of the following examples to change the output from XML to JSON.

Create a suppression:

Name: foo
Description (optional): bar
Search: `get_notable_index`
Start Time (optional): 1/20/2025 (en-US locale in this example)
End Time (optional): 1/22/2025 (en-US locale in this example)
Status: Enabled

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions \
  --data-urlencode name=notable_suppression-foo \
  --data-urlencode description=bar \
  --data-urlencode 'search=`get_notable_index` _time>1737349200 _time<1737522000' \
  --data-urlencode disabled=false

Read a suppression:

curl -k -u admin:pass -X GET https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo

Modify a suppression:

Description: baz
Search: `get_notable_index`
Start Time (optional): (none)
End Time (optional): (none)
Status: (unchanged)

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode description=baz \
  --data-urlencode 'search=`get_notable_index`'

Disable a suppression:

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode disabled=true

Enable a suppression:

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode disabled=false

Delete a suppression:

curl -k -u admin:pass -X DELETE https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/saved/eventtypes/notable_suppression-foo
One comment: never use table before stats! After table, all processing moves to the SH and cannot utilize parallel processing with stats. If you want to remove some fields before stats, always use fields instead of table! You will get more performance that way. Of course, after stats your processing continues on the SH side, but stats does its preprocessing on each indexer at the same time, and only the merging and final stats processing are done on the SH side.
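A hypothetical example of the difference (index, sourcetype, and field names are placeholders):

Slower, because table forces the results to the search head before stats runs:
index=web sourcetype=access_combined | table host status | stats count BY host status

Faster, because fields keeps the pipeline streamable so each indexer can pre-aggregate its own events:
index=web sourcetype=access_combined | fields host status | stats count BY host status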
Hi @Ste,
you have to add values(*) AS * to your stats command; in your case:

| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount values(*) AS * by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF

but the values are grouped by zbpIdentifier.
Ciao.
Giuseppe
Dear experts,
According to the documentation, after stats only the fields used during stats are left.

| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF

To explain in detail: after table, the following fields are available: importZeit_uF, zbpIdentifier, bpKurzName, zbpIdentifier_bp, status, stoerCode. After stats count, only zbpIdentifier and periodCount are left.

Question: how do I change the code above to get the count and still have all fields available as before?

Thank you for your support.