All Posts

See if this run-anywhere example query helps.

| makeresults
| eval data="1997-10-10 15:35:13.046, CREATE_DATE=\"1997-10-10 13:36:22.742479\", LAST_UPDATE_DATE=\"1997-10-10 13:36:22.74\", ACTION=\"externalFactor\", STATUS=\"info\", DATA_STRING=\"<?xml version=\"1.0\" encoding=\"UTF-8\"?><externalFactor><current>parker</current><keywordp><encrypted>true</encrypted><keywordp>******</keywordp></keywordp><boriskhan>boriskhan1-CMX_PRTY</boriskhan></externalFactor>\" 1997-10-10 15:35:13.046, CREATE_DATE=\"1997-10-10 13:03:58.388887\", LAST_UPDATE_DATE=\"1997-10-10 13:03:58.388\", ACTION=\"externalFactor.RESPONSE\", STATUS=\"info\", DATA_STRING=\"<?xml version=\"1.0\" encoding=\"UTF-8\"?><externalFactorReturn><roleName>ROLE.CustomerManager</roleName><roleName>ROLE.DataSteward</roleName><pepres>false</pepres><externalFactor>false</externalFactor><parkeristrator>true</parkeristrator><current>parker</current></externalFactorReturn>"
| eval data=split(data," ")
| mvexpand data
| eval _raw=data
| fields - data
``` Everything above sets up demo data. Delete IRL ```
``` Extract keys and values ```
| rex max_match=0 "<(?<key>[^>]+)>(?<value>[^<]+)<\/\1>"
``` Match keys and values so they stay paired during mvexpand ```
| eval pairs=mvzip(key,value)
| mvexpand pairs
``` Separate key from value ```
| eval pairs=split(pairs,",")
``` Define key=value result ```
| eval pairs=mvindex(pairs,0) . "=" . mvindex(pairs,1)
| table pairs
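For anyone reading along outside Splunk: the core of the `rex`/`mvzip`/`mvindex` pipeline above is a back-referenced regex that pairs each XML tag name with its inner text. A minimal Python sketch of the same extraction (hypothetical sample string, not from the original events):

```python
import re

# Sample fragment shaped like the DATA_STRING in the demo events
xml = "<externalFactorReturn><pepres>false</pepres><current>parker</current></externalFactorReturn>"

# Same pattern as the SPL rex: capture the tag name and its inner text;
# the back-reference \1 ensures the closing tag matches the opening one,
# so nested containers (whose inner text contains '<') are skipped
pairs = re.findall(r"<([^>]+)>([^<]+)</\1>", xml)

# Emit key=value strings, mirroring the mvzip/mvindex steps
result = [f"{k}={v}" for k, v in pairs]
print(result)  # ['pepres=false', 'current=parker']
```

Because `[^<]+` cannot match a nested tag, only leaf elements are captured — the same behaviour the SPL pattern relies on.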
Hi there! I have a multiselect dropdown "office" (full office, half office) in dashboard 1, and the selection drives the results shown there. Dashboard 1 also contains a pie chart; clicking it should drill down to dashboard 2, which has the same multiselect dropdown "office" (full office, half office, non-compliant office). If I click the pie chart while "full office" and "half office" are selected, dashboard 2 should open with the same selection, and its panels should use those values. I have already configured the drilldown link. The problem is that because the dropdown has a prefix ("), suffix ("), and delimiter (,) configured, the formatted value is passed to dashboard 2's dropdown, so the panels in dashboard 2 return no results. How can I solve this? Thanks, Manoj Kumar S
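A sketch of what is likely going wrong (hypothetical values; in Simple XML the formatted token is typically `$office$` while `$form.office$` carries the raw selections, so passing the form token in the drilldown usually fixes this):

```python
# The user's selections in dashboard 1's multiselect
selected = ["full office", "half office"]

# With valuePrefix=", valueSuffix=" and delimiter=, the *formatted* token
# becomes a search-ready string -- wrong as input for another multiselect
formatted = ",".join(f'"{v}"' for v in selected)
print(formatted)  # "full office","half office"

# Dashboard 2's multiselect expects the raw values instead
print(selected)   # ['full office', 'half office']
```

The formatted string is treated by dashboard 2 as one literal value, which is why its panels come back empty.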
Hello, I would like to calculate a weighted average of an average call time. The logs I have available look like the examples below (screenshots omitted from the original post). The formula to apply is:

temps_moyen = ((nb_appel_1 × temps_moyen_1) + (nb_appel_2 × temps_moyen_2) + ...) / (nb_appel_1 + nb_appel_2 + ...)

Here is what I have done so far:

index=rcd statut=OK partenaire=000000000P
| eval date_appel=strftime(_time,"%b %y")
| dedup nom_ws date_appel partenaire temps_rep_max temps_rep_min temps_rep_moyen nb_appel statut tranche_heure heure_appel_max
| eval nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, null())
| eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO", nb_appel, null())
| eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, null())
| eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO", temps_rep_min, null())
| eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, null())
| eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO", temps_rep_max, null())
| eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, null())
| eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO", temps_rep_moyen, null())
| stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO, min(temps_rep_min_OK) as temps_rep_min_OK, min(temps_rep_min_KO) as temps_rep_min_KO, max(temps_rep_max_OK) as temps_rep_max_OK, max(temps_rep_max_KO) as temps_rep_max_KO, values(temps_rep_moyen_OK) as temps_rep_moyen_OK, values(temps_rep_moyen_KO) as temps_rep_moyen_KO, values(nom_ws) as nom_ws, values(date_appel) as date_appel
| eval temps_rep_moyen_KO_calcul=sum(temps_rep_moyen_KO*nb_appel_KO)/(nb_appel_KO)
| eval temps_rep_moyen_OK_calcul=sum(temps_rep_moyen_OK*nb_appel_OK)/(nb_appel_OK)
| fields - tranche_heure_bis, tranche_heure_partenaire
| sort 0 tranche_heure
| table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO

I cannot get the final average OK time displayed. I really need help please. Thank you so much.
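The formula itself can be sanity-checked with a small sketch (hypothetical numbers). Note that in SPL, `sum()` is not an `eval` function — which is likely why the `*_calcul` fields above come back empty; the weighted sum is usually computed before the division, e.g. with `stats sum(eval(nb_appel*temps_rep_moyen)) as num sum(nb_appel) as den | eval moyenne=num/den`.

```python
# Hypothetical call counts and per-slice average response times
nb_appel = [10, 30, 60]
temps_moyen = [2.0, 4.0, 6.0]

# Weighted average: sum(n_i * t_i) / sum(n_i)
temps_moyen_pondere = sum(n * t for n, t in zip(nb_appel, temps_moyen)) / sum(nb_appel)
print(temps_moyen_pondere)  # 5.0
```

With these numbers the plain mean of the three averages would be 4.0, so the weighting visibly matters when call counts differ.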
Hi, can you show your props.conf for that part? Where have you defined the extraction for the field which still contains that data? Is it possible that you first defined the additional field and only after that applied SEDCMD to the raw data? A common mistake is forgetting to mask/change both the raw data and the extracted field (and of course you can't use SEDCMD at search time — it only applies at index time). r. Ismo
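For context, SEDCMD applies a sed-style substitution to the raw event at index time. A Python sketch of the same masking (field name and pattern are hypothetical, just to show the shape of the substitution):

```python
import re

# SEDCMD-style masking, roughly: SEDCMD-mask = s/keyword=\S+/keyword=******/g
raw = "user=parker keyword=secret123 action=login"
masked = re.sub(r"keyword=\S+", "keyword=******", raw)
print(masked)  # user=parker keyword=****** action=login
```

The pitfall described above: if a field was already extracted from the unmasked raw (e.g. at index time), masking only `_raw` afterwards leaves the secret visible in the extracted field.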
No problems with permissions, disk usage etc. I think it's a global problem. I know that some days ago I tried to set up a PKCS#12 certificate (eStreamer) on the Splunk server, but I can't remember where I did those settings.

Output from the commands:

$ source /home/splunk/bin/setSplunkEnv && df -H $SPLUNK_HOME $SPLUNK_DB
Tab-completion of "splunk <verb> <object>" is available.
Filesystem               Size  Used  Avail  Use%  Mounted on
/dev/mapper/centos-home  886G  587G  300G   67%   /home

$ sudo /home/splunk/bin/splunk btool indexes list volume | egrep '(\[|path)'
[volume:_splunk_summaries]
path = $SPLUNK_DB

$ df
Filesystem               1K-blocks      Used       Available  Use%  Mounted on
/dev/mapper/centos-root  52403200       11477568   40925632   22%   /
devtmpfs                 16312676       0          16312676   0%    /dev
tmpfs                    16329816       0          16329816   0%    /dev/shm
tmpfs                    16329816       10560      16319256   1%    /run
tmpfs                    16329816       0          16329816   0%    /sys/fs/cgroup
/dev/sda3                1038336        173348     864988     17%   /boot
/dev/mapper/centos-home  865131800      558906488  306225312  65%   /home
tmpfs                    3265964        12         3265952    1%    /run/user/42
tmpfs                    3265964        0          3265964    0%    /run/user/1001
Hi, I think this alert_actions.conf error is still waiting for a fix. You could get rid of the execve error by disabling boot-start and then enabling it again. r. Ismo
Hi, when you ran that command, did you get any errors/warnings? Have you tried this?

sudo -u root bash $SPLUNK_HOME/bin/splunk enable boot-start -user splunk -systemd-managed 1

On current Linux versions it's usually better to run Splunk under systemd than the old init. But if you still want to use init, then you must also update those startup scripts as these instructions describe: https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/ConfigureSplunktostartatboottime r. Ismo
Hi, if you cannot get any new data, the most obvious reason is that the disk is full. The second is that for some reason the permissions/ownerships on disk have changed. Please try "source /opt/splunk/bin/setSplunkEnv && df -H $SPLUNK_HOME $SPLUNK_DB" as root on the command line. Also check whether you have volumes in use, and check that disk space too. To find volumes, log in as the splunk user and run:

splunk btool indexes list volume | egrep '(\[|path)'

which shows the physical disk areas the volumes are using. If there is enough space left, then check the ownership of those directories/files and change them if needed. Did I understand correctly that you get some new data into the _audit index, but not anywhere else? r. Ismo
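If shelling out to `df` is awkward (e.g. from a monitoring script), the same free-space check can be sketched in Python; the path here is the root filesystem as a stand-in for $SPLUNK_HOME or $SPLUNK_DB:

```python
import shutil

# Equivalent of `df -H <path>`: total/used/free in bytes for the filesystem
usage = shutil.disk_usage("/")
pct_used = usage.used / usage.total * 100
print(f"{pct_used:.0f}% used, {usage.free / 1e9:.1f} GB free")
```

A full disk (or one past Splunk's minFreeSpace threshold) is the first thing this would reveal.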
As has been said earlier, all those queries against _internal logs work only if you still have the events on the indexers. Quite often the retention time for _internal is so short that you won't have them in any larger environment!
Thanks. No problems with permissions. It could be something wrong with some conf files, but since the problem involves all index files, it must be some global setting, or some service/program not running. Do you think it's best to back up $SPLUNK_HOME/etc, run an installation/upgrade, and then copy the etc files into the new installation? Geir
If you have added all peers by adding the cluster as a search target, then just running "splunk remove cluster-peers <GUID>" on the CM should be enough to remove the peer from the CM's search peer list, after you have removed that peer from the cluster. If this doesn't work, you should create a support case with Splunk. Of course, if you have manually configured something extra in your health report, then you probably need to update it? See https://docs.splunk.com/Documentation/Splunk/9.1.1/DMC/Configurefeaturemonitoring
And for Ubuntu: when you try to start it manually, does it start, or does it give the same errors?
For Windows, the service's startup type should be set to Automatic for it to start on boot.
Or are you getting any permissions issues from Splunk?
Did you try btool to check your configs (indexes.conf, inputs.conf, etc.)? Maybe there is an overlapping setting routing the data somewhere else.
Still having this error with 9.0.4, I'm afraid.

50b81383ef0d:/opt/splunkforwarder/bin# ./splunk start --accept-license --answer-yes --no-prompt
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
This appears to be your first time running this version of Splunk.

Creating unit file...
Error calling execve(): No such file or directory
Error launching command: No such file or directory
Failed to create the unit file. Please do it manually later.

Splunk> The Notorious B.I.G. D.A.T.A.

Checking prerequisites...
    Checking mgmt port [8089]: open
    Creating: /opt/splunkforwarder/var/lib/splunk
    Creating: /opt/splunkforwarder/var/run/splunk
    Creating: /opt/splunkforwarder/var/run/splunk/appserver/i18n
    Creating: /opt/splunkforwarder/var/run/splunk/appserver/modules/static/css
    Creating: /opt/splunkforwarder/var/run/splunk/upload
    Creating: /opt/splunkforwarder/var/run/splunk/search_telemetry
    Creating: /opt/splunkforwarder/var/spool/splunk
    Creating: /opt/splunkforwarder/var/spool/dirmoncache
    Creating: /opt/splunkforwarder/var/lib/splunk/authDb
    Creating: /opt/splunkforwarder/var/lib/splunk/hashDb
Checking conf files for problems...
    Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
    Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
    Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.0.4-de405f4a7979-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done
A bit of an old post, but I had this exact error, spent way too long troubleshooting it, and was saddened that this post didn't have an accepted solution. The problem is that the S3 VPC endpoint you are using DOES NOT match the format Splunk expects. When the add-on does hostname validation of the S3 VPC endpoint against the expected format, it fails and throws this ugly error. You said: "As part of the Splunk AWS Add-on naming convention for private endpoints, the Private Endpoint URL for the S3 bucket must be https://vpce-<endpoint_id>-<unique_id>.s3.<region>.vpce.amazonaws.com". This isn't true, as the docs explain. The format for S3 is actually https://bucket.vpce-<endpoint_id>-<unique_id>.s3.<region>.vpce.amazonaws.com. I didn't read the documentation closely enough and wasted a lot of time, so I hope this helps someone.
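A quick way to check an endpoint URL before pasting it into the add-on is a regex against the bucket-style format described above. This is a sketch based on the poster's format string (the sample endpoint IDs are made up):

```python
import re

# Bucket-style S3 interface-endpoint URL format from the post:
# https://bucket.vpce-<endpoint_id>-<unique_id>.s3.<region>.vpce.amazonaws.com
PATTERN = re.compile(
    r"^https://bucket\.vpce-[a-z0-9]+-[a-z0-9]+\.s3\.[a-z0-9-]+\.vpce\.amazonaws\.com$"
)

ok = "https://bucket.vpce-0abc123-xyz789.s3.us-east-1.vpce.amazonaws.com"
bad = "https://vpce-0abc123-xyz789.s3.us-east-1.vpce.amazonaws.com"  # missing "bucket."

print(bool(PATTERN.match(ok)))   # True
print(bool(PATTERN.match(bad)))  # False
```

The second URL is exactly the shape that triggered the hostname-validation error for me.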
Hello community, I came across an issue where the token generated for the SOAR user "REST" that I use for the SIEM-SOAR integration was identical to the one in the Splunk App for SOAR. When I ran the "test connectivity" command on the SOAR Server Configuration, it responded with "Authentication Failed: Invalid token". I simply regenerated the token and everything works like a charm. Have you ever encountered such an issue?
There is not really enough information here to be able to easily help you. Please can you share your full search and some anonymised sample events for the volunteers to work with.
Hi, I would like to export a table to CSV in Dashboard Studio. Unfortunately, when I click export, only a PNG is exported. Any hint? Thank you. Best regards, Marta