All Posts


It doesn't work in my case. Do you have another solution?
This rule already has a default from Splunk, with an earliest of rt-65m@m and a latest of rt-5m@m. But doesn't the drilldown only follow the time at which the event was triggered?
Thank you, Giuseppe!   I appreciate the help!
Hi All, I want to download a search result as a CSV file into my local folder. Can anyone suggest some good methods for this and how to do it? I saw some examples using the curl command and the REST API, but I couldn't fully understand them. Can anyone help me with this?
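As a rough sketch of the curl + REST API approach mentioned above: Splunk's /services/search/jobs/export endpoint can stream search results directly as CSV. This is only a sketch — HOST, USER and PASS are placeholders, and the search string is an example; adjust for your environment.

```shell
# Hedged sketch: export search results as CSV via Splunk's REST API.
# HOST, USER, PASS are placeholders for your search head and credentials.
# -k skips TLS verification (common with self-signed certs; remove if unneeded).
curl -k -u USER:PASS \
  "https://HOST:8089/services/search/jobs/export" \
  --data-urlencode search='search index=_internal | head 100' \
  -d output_mode=csv \
  -o results.csv
```

The export endpoint streams results as they are produced, so it avoids the create-job/poll/fetch dance needed with /services/search/jobs.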
I added the notables to an investigation and was able to add notes as well. However, I'm trying to have an incident number for the incident so that I can use it for tracking purposes. I recently learnt that Splunk ES version 8 provides an incident number along with the investigation we create. I should test this out first, because I'm using Splunk ES version 5. Correct me if I'm wrong. Thanks for your assistance!
1. For a user to use the Splunk support portal, should the user be granted access to the support portal? Don't they get access inherently?
2. The company has 2 different instances of Splunk. Will a dashboard created in one be visible in the other as well? Are the 2 instances independent of each other? Can you paint a picture for me of how they'd be related?
3. In order for me to know the answers to these questions, what concepts/topics should I know well?
thanks, works for me.   
Look at using INGEST_EVAL, where you can remove data from the JSON simply using eval statements, e.g.:

_raw=json_delete(_raw, "avg_ingress_latency_fe", "conn_est_time_fe", "client_insights")

https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/IngestEval
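For context, INGEST_EVAL lives in transforms.conf and is wired up to the sourcetype via props.conf. A minimal sketch — the stanza and transform names here are made up, and the sourcetype is a placeholder:

```
# transforms.conf
[strip_json_fields]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_fe", "conn_est_time_fe", "client_insights")

# props.conf
[yourSourcetype]
TRANSFORMS-stripJsonFields = strip_json_fields
```

This runs at index time on the parsing tier (indexers or heavy forwarders), so it must be deployed there rather than on a universal forwarder.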
Hi @Ciccius

I feel your frustration - I've written multiple inputs and had issues like this, and it can be a pain to resolve. I've always found the best place to start is with the following:

$SPLUNK_HOME/bin/splunk cmd splunkd print-modinput-config yourSchema yourStanza

If you've created a simple input then yourSchema might equal yourStanza; however, you might have multiple stanzas for a single input (e.g. yourInput://stanza1 and yourInput://stanza2).

If you run the above then it should spit out the schema for your stanza. If you get any errors then you should investigate! If you get an XML output then you can try running:

$SPLUNK_HOME/bin/splunk cmd splunkd print-modinput-config yourSchema yourStanza | $SPLUNK_HOME/bin/splunk cmd python3 /opt/splunk/etc/apps/adsmart_summary/bin/getCampaignData.py

This invokes the modular input as Splunk would from within a scheduled ExecProcess, which might give you more insight into the goings-on within your input. I use this all the time to test inputs so I don't need to wait for the interval to pass!

Please let me know how you get on, and consider accepting this answer or adding karma if it has helped.

Regards
Will
Hi @kemeris

I've been having a play around with this; the only way I can make it work is using Saved Searches, as follows:

1. Create saved searches for each platform in a format such as "MySearch - $platform$" (e.g. MySearch - Amazon).
2. Create a dropdown with multiple options, where each option's value is set to the name of your saved search (e.g. Name: Amazon, Value: MySearch - Amazon). Assume the name of your dropdown token is "ds_token".
3. Create a base search in Dashboard Studio with the following search:

| savedsearch $ds_token|s$

The |s (pipe "s") will enclose the name in quotes. This will then load the saved search whose name is set in the value of the dropdown when selected. You can use this search throughout your dashboard, or chain additional searches as required.

Please let me know how you get on, and consider accepting this answer or adding karma if it has helped.

Regards
Will
You would therefore use "vs_name" in place of "_raw" in the replace command.

You can use multiple transforms on a single sourcetype - even if you're already using an INGEST_EVAL. For example:

== props.conf ==
[yourSourcetype]
TRANSFORMS-defineIndex = defineIndex
TRANSFORMS-extractServerId = extractServerId
... etc ...

Please let me know how you get on, and consider accepting this answer or adding karma if it has helped.

Regards
Will
Sure @jialiu907

Just to mention, by default rex works on the _raw field; however, you can specify field=<fieldName> to run it against a different field.

Breakdown of the rex (regular expression):

\)\: Matches a literal ) followed by a :. The backslash (\) escapes the closing parenthesis ) since it's a special character in regex.

\s Matches a single whitespace character (space, tab, or newline).

(?<Disconnect>SSLSocket Disconnected from Cloud) This is a named capturing group called Disconnect, which creates your new Splunk field called "Disconnect". It captures the exact phrase "SSLSocket Disconnected from Cloud" - if there is no exact match (it is case-sensitive) then it will not match! The (?<name>pattern) syntax is used to name the capturing group and extract the field.

Please let me know how you get on, and consider accepting this answer or adding karma if it has helped.

Regards
Will
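The same pattern can be exercised outside Splunk to build intuition. A small Python sketch — note Python spells the named group (?P<name>...) where Splunk's rex (PCRE-style) accepts (?<name>...); the sample log line is invented for illustration:

```python
import re

# Splunk's rex uses PCRE-style (?<name>...); Python spells the same
# named group (?P<name>...). The pattern is otherwise identical.
pattern = re.compile(r"\)\:\s(?P<Disconnect>SSLSocket Disconnected from Cloud)")

# Invented sample line: literal "):", a space, then the exact phrase.
line = "(session-42): SSLSocket Disconnected from Cloud"
m = pattern.search(line)
print(m.group("Disconnect"))  # → SSLSocket Disconnected from Cloud

# The match is case-sensitive: a lowercase phrase does not match.
print(pattern.search("(session-42): sslsocket disconnected from cloud"))  # → None
```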
Hi @MichalDerySam  I had this issue previously - I would recommend sending an email to splunkbase-admin@splunk.com with the details of your app and let them know you have submitted a new version and they will be able to un-archive it for you.  Their turnaround is usually relatively quick!   Please let me know how you get on and consider accepting this answer or adding karma this answer if it has helped. Regards Will  
All the private apps on my Splunk installation pass the jQuery and Python readiness checks. It is just the public Config Explorer that failed the Python readiness check. I checked for customized code and editing of any files: using the Linux CLI, I compared all the files in etc/apps/config_explorer/... with another Splunk server that had the app installed and did not get the Python incompatibility. I compared the filename, size, and date of the files, and all the files were the same on each server. Since one server flags config_explorer with a Python version incompatibility, I now wonder if the Upgrade Readiness App has code in need of updating.

The wget command recommended in the link ".../9.4.0/UpgradeReadiness/ResultsPython" that will "undo dismissed apps" was helpful in reversing the effect of "Dismiss App Alert", but Config Explorer now fails the Python scan with "This newly installed App has not completed the necessary scan." instead of the Python incompatibility. I don't know if the wget that reverses the dismiss app alert and the new error from Upgrade Readiness are related.
My app has been archived. I uploaded a new version today and the "Compatibility Report" passed all checks, but it didn't help; the app is still in "archived" mode. Any idea what I need to do to solve this and be able to activate it again? Thanks
I think this "admin_all_objects" privilege is needed by the app to access the client secret stored in the app, which is used to authenticate the advanced hunting requests. There is another app (https://splunkbase.splunk.com/app/6463) which appears to do the same thing albeit with a differently named command "defkqlg". It says in the Details tab that you can use the "edit_storage_passwords" capability instead of "admin_all_objects" if your Splunk Enterprise version is later than 9.1.0. It might also be possible to use edit_storage_passwords privilege instead on the MS Defender Advanced Hunting app, but it would need to be tested.
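If the edit_storage_passwords route does work for your version, the role change would live in authorize.conf. A hypothetical sketch — the role name is made up, and this has not been verified against the MS Defender Advanced Hunting app:

```
# authorize.conf (role name is hypothetical)
[role_defender_hunting]
importRoles = user
list_storage_passwords = enabled
edit_storage_passwords = enabled
```

Assigning analysts this role instead of one carrying admin_all_objects scopes them to credential access only, which is the point of the substitution — but test that the app's command actually runs with it before rolling it out.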
Turns out the issue was the break and inspect from the corporate firewall. Standard global git config fix didn't work, as it seems that as part of the install process, SOAR changes the config key to http.sslcainfo=$SOAR_HOME/etc/cacerts.pem. Modifying that cacerts.pem file to add the full chain of certs you get when navigating to GitHub from a browser on the same network ended up working to get SOAR to install successfully.
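For anyone hitting the same thing, one way to capture the chain as presented through the inspecting firewall is with openssl rather than a browser export. This is a sketch under the assumptions from this thread ($SOAR_HOME set, github.com reached through the firewall); it depends on your network, so adjust before running:

```shell
# Capture the certificate chain the firewall presents for github.com
# and append it to the bundle SOAR's git config points at.
openssl s_client -showcerts -connect github.com:443 </dev/null 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' >> "$SOAR_HOME/etc/cacerts.pem"

# Confirm git is actually using that bundle:
git config --get http.sslcainfo
```

The awk range pattern keeps only the PEM blocks from openssl's verbose output, so the bundle stays a clean concatenation of certificates.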
Our Security partners at work recently determined that their analysts need the ability to run the custom command advhunt (TA_ms_advanced_hunting). The custom command indicates:

To use this app, users need the following privileges:
list_storage_passwords
admin_all_objects

We do not want to give all Security users admin_all_objects. What other options do we have?
Hi @seiimonn !

Debian GNU/Linux 12 (bookworm), Splunk Enterprise 9.0.0, AME 3.0.8.

Sysinfo:

{"uuid":"95c6740c-9e0b-42b1-b2b9-b78067db6677","status":200,"messages":[],"payload":{"tenant_list":[{"tenant_uid":"default","role":"admin"}],"is_admin":true,"is_app_admin":true,"products":[],"necessary_tasks":[],"legacy_installed":false,"environment":"on_premises","timezone":"UTC"}}

There are no errors now if I run this search:

index=_internal source=*ame* ERROR | table _time host source _raw

But maybe these log entries are interesting:

19/02/2025 19:34:15.506
02-19-2025 19:34:15.506 +0100 WARN HttpListener [1069 HttpDedicatedIoThread-4] - Socket error from 127.0.0.1:37790 while accessing /servicesNS/nobody/alert_manager_enterprise/properties/server: Broken pipe
host = splunk source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

19/02/2025 19:34:08.828
2025-02-19 19:34:08,828 INFO [assist::supervisor_modular_input.py] [context] [build_supervisor_secrets] [22691] Secret load failed, key=tenant_id, error=[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/splunk_assist/storage/passwords/tenant_id?output_mode=json
host = splunk source = /opt/splunk/var/log/splunk/splunk_assist_supervisor_modular_input.log sourcetype = splunk_assist_internal_log

19/02/2025 19:34:06.362
02-19-2025 19:34:06.362 +0100 WARN HttpListener [1068 HttpDedicatedIoThread-3] - Socket error from 127.0.0.1:52422 while accessing /servicesNS/nobody/alert_manager_enterprise/properties/server: Broken pipe
host = splunk source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

19/02/2025 19:33:56.500
2025-02-19 19:33:56.500 +0100 Trace-Id= type=METER, name=ch.qos.logback.core.Appender.error, count=3, m1_rate=3.527460396057507E-12, m5_rate=9.325633072421824E-5, m15_rate=7.016228689718483E-4, mean_rate=0.0019981503731937404, rate_unit=events/second
host = splunk source = /opt/splunk/var/log/splunk/splunk_app_db_connect_health_metrics.log sourcetype = dbx_health_metrics
19/02/2025 19:33:54.415
2025-02-19 19:33:54,415 INFO [assist::supervisor_modular_input.py] [context] [build_supervisor_secrets] [22467] Secret load failed, key=tenant_id, error=[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/splunk_assist/storage/passwords/tenant_id?output_mode=json
host = splunk source = /opt/splunk/var/log/splunk/splunk_assist_supervisor_modular_input.log sourcetype = splunk_assist_internal_log

I use this search to create test alerts:

| makeresults | eval user="World", src="192.168.0.1", action="create test event" | sendalert create_alert param.title="Hello $result.user$" param.template=default param.tenant_uid=default

I think there is nothing interesting in the browser's developer console. What do you think about that? Thanks for your help.
Hi Andras

Which OS, Splunk version and AME version are you using?

Is there anything visible in the browser developer console?

Can you check if something loads if you open this URL in your browser?
https://your-splunk:8000/en-GB/splunkd/__raw/services/ame_sysinfo

Are there any errors visible with:

index=_internal source=*ame* ERROR | table _time host source _raw

Please open a support case if you cannot share this information publicly.

Regards, Simon