All Topics

Hi guys, I am using multiple keywords to get a count of errors from different messages, so I am trying a case statement to achieve it:

    index="mulesoft" applicationName="api" environment="*" (message="Concur Ondemand Started") OR (message="API: START: /v1/fin_Concur") OR (message="*(ERROR): concur import failed for file*") OR (tracePoint="EXCEPTION")
    | dedup correlationId
    | eval JobName=case(like('message',"Concur Ondemand Started") OR like('message',"API: START: /v1/fin_Concur%") AND like('tracePoint',"EXCEPTION"),"EXPENSE JOB", like('message',"%(ERROR): concur import failed for file%"),"ACCURAL JOB")
    | stats count by JobName

But I am getting only the EXPENSE JOB JobName. When I split it into two separate queries, both JobNames return results.
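One thing worth checking: in eval, AND binds more tightly than OR, so the first case() condition above is evaluated as A OR (B AND C) rather than (A OR B) AND C. A minimal sketch with explicit parentheses, assuming the intent is (A OR B) AND C (field names and labels are taken from the query above; adjust the grouping if the intended logic is different):

    index="mulesoft" applicationName="api" environment="*" (message="Concur Ondemand Started") OR (message="API: START: /v1/fin_Concur") OR (message="*(ERROR): concur import failed for file*") OR (tracePoint="EXCEPTION")
    | dedup correlationId
    | eval JobName=case(
        (like('message',"Concur Ondemand Started") OR like('message',"API: START: /v1/fin_Concur%")) AND like('tracePoint',"EXCEPTION"), "EXPENSE JOB",
        like('message',"%(ERROR): concur import failed for file%"), "ACCURAL JOB")
    | stats count by JobName

Note also that dedup correlationId keeps only one event per correlation ID, which can discard the events the second case() branch would otherwise match.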
Below is the query I am using to get the list of all indexes:

    | eventcount summarize=false index=* | dedup index | fields index

The macro `dm_mapped_indexes` contains a list of indexes. Now I want to filter out all the indexes contained in the `dm_mapped_indexes` macro and get all the other indexes.
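A minimal sketch, assuming `dm_mapped_indexes` expands to a search filter of the form (index=foo OR index=bar); if the macro expands to something else, the exclusion needs to be adapted:

    | eventcount summarize=false index=*
    | dedup index
    | fields index
    | search NOT `dm_mapped_indexes`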
Hello, this question has probably been asked and answered, but I just can't seem to find the best solution. In the results I want to table the Allow and Deny values. The second result would be | search Action=eks* only where the Effect is Allow. I have tried until now, but I cannot relate the Allow action; it lists all values. Thanks in advance.

    {
      "Action": [ "eks:*", "ecs:*" ],
      "Effect": "Allow"
    },
    {
      "Action": [ "config:*", "budgets:*" ],
      "Effect": "Deny"
    }
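A sketch of one approach, assuming each event is a JSON object with an Action array and an Effect field as in the snippet above (the paths are guesses from that snippet):

    | spath path=Action{} output=Action
    | spath path=Effect output=Effect
    | mvexpand Action
    | where Effect="Allow" AND like(Action, "eks%")
    | table Action Effect

mvexpand splits the multivalue Action field into one row per action so that the Effect filter applies per action rather than per event.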
Dataframe row: a flattened JSON string of job entries, reconstructed below as a table. Every entry has Timestamp "20240317 13:25:23"; the paste is truncated after entry 26.

    #   jobname                                                          status
    0   A001_GVE_ADHOC_AUDIT                                             ENDED NOTOK
    1   BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS                    ENDED NOTOK
    2   BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY             ENDED NOTOK
    3   D001_GVE_SOFT_MATCHING_GDH_CA                                    ENDED NOTOK
    4   D100_AKS_CDWH_SQOOP_TRX_ORG                                      ENDED NOTOK
    5   D100_AKS_CDWH_SQOOP_TYP_123                                      ENDED NOTOK
    6   D100_AKS_CDWH_SQOOP_TYP_45                                       ENDED OK
    7   D100_AKS_CDWH_SQOOP_TYP_ENPW                                     ENDED NOTOK
    8   D100_AKS_CDWH_SQOOP_TYP_T                                        ENDED NOTOK
    9   DREAMPC_CALC_ML_NAMESAPCE                                        ENDED NOTOK
    10  DREAMPC_MEMORY_AlERT_SIT                                         ENDED NOTOK
    11  DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS               ENDED NOTOK
    12  DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY        ENDED NOTOK
    13  DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS                         ENDED OK
    14  DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY                  ENDED OK
    15  DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS                             ENDED OK
    16  DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY                      ENDED OK
    17  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH               ENDED OK
    18  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY        ENDED OK
    19  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT      ENDED NOTOK
    20  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN       ENDED NOTOK
    21  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR          ENDED OK
    22  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY   ENDED OK
    23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON          ENDED NOTOK
    24  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY   ENDED OK
    25  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI               ENDED NOTOK
    26  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY        ENDED NOTOK
We are planning to integrate our SAP BTP Fiori app with Cisco AppDynamics and need some guidance. Could you please provide us with information on the following:
- The initial setup required in AppDynamics for SAP Fiori apps.
- Any specific agents or SDKs we should use for monitoring Fiori apps.
- How to set up custom metrics and configure alerts for Fiori apps.
- Tips on troubleshooting common issues during integration.
We appreciate any documentation, resources, or advice you can share to help us ensure a smooth integration.
Hello everyone. I have Splunk Enterprise installed on a CentOS 7 Linux OS. I have added CSV data and I wish to build a dashboard; however, when attempting to add a background image I am getting a 503 Service Unavailable error. When running the start command, this is what I get:

    Splunk> Finding your faults, just like mom.
    Checking prerequisites...
        Checking http port [8000]: open
        Checking mgmt port [8089]: open
        Checking appserver port [127.0.0.1:8065]: open
        Checking kvstore port [8191]: open
        Checking configuration... Done.
        Checking critical directories... Done
        Checking indexes...
            Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history keepereu main summary
        Done
        Checking filesystem compatibility... Done
        Checking conf files for problems... Done
        Checking default conf files for edits...
        Validating installed files against hashes from '/opt/splunk/splunk-9.2.0.1-d8ae995bf219-linux-2.6-x86_64-manifest'
        All installed files intact.
        Done
    All preliminary checks passed.
    Starting splunk server daemon (splunkd)...
    PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
    Done
    [ OK ]
    Waiting for web server at http://127.0.0.1:8000 to be available.....................

When running the status command, I get:

    [root@localhost splunk]# /opt/splunk/bin/splunk status
    splunkd is running (PID: 2614).
    splunk helpers are running (PIDs: 2640 3079 3140 10111 10113 10144 10155 10172 10692 10808 11367 21735 24183).
    [root@localhost splunk]#

What logs should I inspect to better understand why this is happening?
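For reference, splunkd and Splunk Web write their logs under $SPLUNK_HOME/var/log/splunk (splunkd.log and web_service.log are the usual starting points for 503s). A sketch of a search over recent internal errors, assuming the _internal index is searchable on this instance:

    index=_internal sourcetype=splunkd log_level=ERROR
    | stats count by component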
Hi, I have a requirement as below; please could you review and suggest? I need to pick up all client IDs from an application log called "Cos" (index=a sourcetype=Cos), where the distinct client IDs number around 6 million. I want to check whether these client IDs are present in another application log called "Ma" (index=a sourcetype=Ma), and likewise in another application called "Ph" (index=a sourcetype=Ph). Basically, I am trying to get the count/volume based on the client IDs that are common among the 3 applications (Cos, Ma, Ph). The total events are in the millions, and when I use join, the search job gets auto-cancelled or terminated.

    (index=a sourcetype=Cos) OR (index=a sourcetype=Ma) OR (index=a sourcetype=Ph)
    | stats count by clientid, sourcetype

Thanks, Selvam.
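A sketch that avoids join entirely by counting sourcetypes per client ID in a single pass (this assumes the field is named clientid in all three sourcetypes):

    index=a (sourcetype=Cos OR sourcetype=Ma OR sourcetype=Ph)
    | stats dc(sourcetype) as st_count, count by clientid
    | where st_count=3

Only client IDs seen in all three sourcetypes survive the where clause, so no subsearch or join limits apply.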
When I do a Splunk forwarder version upgrade to 9.x, it always fails with the error below:

    Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2024-03-25.18-09-26'
    -- Error calling execve(): No such file or directory
    Error launching command: No such file or directory

As per the discussion at https://community.splunk.com/t5/Installation/Upgrading-Universal-Forwarder-8-x-x-to-9-x-x-does-not-work/m-p/665668, we have to enable the tty option on the Docker runtime to successfully bring up the 9.x forwarder. Indeed, I added the tty config to my docker-compose file and it works, but I would call that a poor workaround. Why does the 9.x forwarder require a tty terminal environment to start? Can this restriction be removed? In many cases we have to bring up a forwarder instance from a background program rather than a terminal, and in some cases we have to use a process manager to control forwarder start and resume. Can the tty restriction be removed from newer 9.x forwarders, just as it was absent in 8.x and 7.x?
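For reference, the workaround described above amounts to a compose entry along these lines (the service name and image tag here are placeholders):

    services:
      splunkforwarder:
        image: splunk/universalforwarder:<version>
        tty: true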
Hi, I am using a Splunk dashboard with SimpleXML formatting. This is my current dashboard code (the queries are masked; the structure is as defined):

    <row>
      <panel>
        <html depends="$alwaysHideCSS$">
          <style>
            #table_ref_base{ width:50% !important; float:left !important; height: 800px !important; }
            #table_ref_red{ width:50% !important; float:right !important; height: 400px !important; }
            #table_ref_org{ width:50% !important; float:right !important; height: 400px !important; }
          </style>
        </html>
      </panel>
    </row>
    <row>
      <panel id="table_ref_base">
        <table>
          <title>Signals from Week $tk_chosen_start_wk$ ~ Week $tk_chosen_end_wk$</title>
          <search id="search_ref_base">
            <query></query>
            <earliest>$tk_search_start_week$</earliest>
            <latest>$tk_search_end_week$</latest>
          </search>
          <option name="count">30</option>
          <option name="dataOverlayMode">none</option>
          <option name="drilldown">row</option>
          <option name="percentagesRow">false</option>
          <option name="totalsRow">false</option>
          <option name="wrap">true</option>
        </table>
      </panel>
      <panel id="table_ref_red">
        <table>
          <title> (Red) - Critical/Severe Detected (Division_HQ/PG2/Criteria/Value)</title>
          <search base="search_ref_base">
            <query></query>
          </search>
          <option name="count">5</option>
          <option name="refresh.display">progressbar</option>
        </table>
      </panel>
      <panel id="table_ref_org">
        <table>
          <title>🟠 (Orange) - High/Warning Detected (Division_HQ/PG2/Criteria/Value)</title>
          <search base="search_ref_base">
            <query></query>
          </search>
          <option name="count">5</option>
          <option name="refresh.display">progressbar</option>
        </table>
      </panel>
    </row>

However, my dashboard does not render as intended (the screenshots of the preferred and actual layouts are not reproduced here). I thought that defining 800px on the left panel and 400px on each right panel would stack the two right panels beside the tall left one, but it did not, and the result also leaves needless white space below. Thanks for your help! Sincerely, Chung
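One approach that may achieve the stacked layout without CSS height hacks: a SimpleXML panel can contain multiple visualizations, and they stack vertically inside the panel. So the two right-hand tables could live in a single panel next to the base panel; a sketch (the panel id is made up, and the table contents are abbreviated from the code above):

    <row>
      <panel id="table_ref_base">
        <table>...base table as above...</table>
      </panel>
      <panel id="panel_right">
        <table>...red table as above...</table>
        <table>...orange table as above...</table>
      </panel>
    </row>

With this layout, the row height follows the tallest panel, which may also remove the trailing white space that the float-based CSS leaves behind.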
We are trying to set up a cron schedule on an alert to run only on weekends (Saturday and Sunday) at 6am, 12pm, 8pm, and 10pm. I tried giving the cron below in Splunk, but it says the cron is invalid. Can anyone help with this?
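For what it's worth, a standard five-field cron expression for those times on weekends (Splunk's day-of-week field uses 0 for Sunday and 6 for Saturday) would be:

    0 6,12,20,22 * * 0,6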
Hi, I have an app that ingests offenses from a SIEM system (QRadar). One time there were a few thousand offenses to ingest at the same time, and it caused an error in the app's ingestion, and none of the offenses were ingested for a few hours. Is there a way to alert when there is an ingestion error for an app, and maybe a way to fix it?
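A sketch of an alert search over the internal logs for errors from the app's input scripts (the "qradar" term is a guess at what the app's script paths contain; adjust it to the actual app):

    index=_internal sourcetype=splunkd log_level=ERROR component=ExecProcessor "qradar"

Saved as an alert with a "number of results > 0" trigger, this would fire whenever the scripted or modular input logs an error.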
I have a dashboard with 4 multi-select boxes and an input file with all possible results for each app. When there are no results for an app, it is reported as 100%. The problem is that the results include all apps and ignore the multi-selects, because of the input file. Below are the lookup contents and the code.

    data.environment.application  data.environment.environment  data.environment.stack  data.componentId
    app1                          prod                          AZ                      Acomp
    app1                          prod                          AZ                      Bcomp
    app2                          uat                           AW                      Zcomp
    app2                          uat                           AW                      Ycomp
    app2                          uat                           AW                      Xcomp
    app3                          prod                          GC                      Mcomp

    index=MINE data.environment.application="app2" data.environment.environment="uat"
    | eval estack="AW"
    | fillnull value="uat" estack data.environment.stack
    | where 'data.environment.stack'=estack
    | streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
    | eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
    | transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
    | stats sum(duration) as downtime by data.componentId
    | inputlookup append=true all_env_component.csv
    | fillnull value=0
    | addinfo
    | eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
    | rename data.componentId AS Component, avail AS Availability
    | fillnull value=100 Availability
    | dedup Component
    | table Component, Availability

Thank you in advance for the help.
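One possible fix is to filter the appended lookup rows with the same tokens the multi-selects set, right after the append. A sketch (the token names $app_tok$ and $env_tok$ are placeholders, and this assumes the multi-select tokens are configured to render as a comma-separated value list usable inside IN()):

    ...
    | inputlookup append=true all_env_component.csv
    | search data.environment.application IN ($app_tok$) data.environment.environment IN ($env_tok$)
    ...

Since the main search is already constrained to those apps and environments, the extra filter only trims the appended lookup rows.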
I want to add C:\windows\system32\winevt\logs\Microsoft-Windows-DriverFrameworks-UserMode/Operational  as a stanza in my inputs.conf. How do I write the stanza? Thank you
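Windows Event Log channels are usually referenced by channel name rather than by the .evtx file path, so a sketch of the stanza (using the channel name implied by that path) would be:

    [WinEventLog://Microsoft-Windows-DriverFrameworks-UserMode/Operational]
    disabled = 0

This goes in inputs.conf on the Windows forwarder that should collect the channel.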
Is it possible in Splunk to have one props.conf file on one server's Universal Forwarder (UF) for a specific app, and another props.conf file on a different server for the same app, but with one file masking a certain field and the other not?
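In principle yes, since each forwarder reads its own copy of the app's configuration. A sketch of what the masking side might carry (the sourcetype and pattern here are made up; note that SEDCMD takes effect where parsing happens, i.e. on a heavy forwarder or indexer rather than on a plain universal forwarder, so placement matters):

    # props.conf in the app copy on the server that should mask
    [my_sourcetype]
    SEDCMD-mask_account = s/acct=\d+/acct=xxxx/g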
I'm trying to achieve the following and hoped someone could help. I have a multivalue field that contains values that are colors, and I would like to know how many events contain duplicate colors, and what those colors are. E.g. my data (a multivalue field named colors, one set per event):

    colors: blue blue red yellow red blue red blue red red green green

This would return something like:

    duplicate_color  duplicate_count
    blue             2
    red              1
    green            1

because 'blue' is present as a duplicate in two entries, 'red' in one entry, and 'green' in one entry; 'yellow' is omitted because it is never duplicated. Thank you very much for any help. Steve
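A sketch of one way to do this, assuming each event carries a multivalue field named colors:

    | streamstats count as row
    | mvexpand colors
    | stats count by row, colors
    | where count > 1
    | stats dc(row) as duplicate_count by colors
    | rename colors as duplicate_color

The streamstats row number preserves event identity across mvexpand, so the final stats counts how many distinct events contained each duplicated color.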
Hello, I have a Splunk query returning my search results:

    index="demo1" source="demo2"
    | rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
    | rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
    | search test_field_name=test_field_name_1
    | table _raw id_num
    | reverse
    | filldown id_num

In the table above, _raw may contain *fail_msg1* or *fail_msg2*. I have created a lookup file sample.csv with the following content:

    Product,Feature,FailureMsg
    ABC,DEF,fail_msg1
    ABC,DEF,fail_msg2

I want to check whether a FailureMsg value (fail_msg1 OR fail_msg2) is found in the _raw of my search results and return only the matching lines; if neither is found, return nothing. Could you please share how to write the lookup or inputlookup to fetch these results?
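A sketch using a subsearch to turn the lookup column into wildcarded search terms (this relies on the special field name "search", whose values a subsearch returns as raw search terms, OR'd together):

    index="demo1" source="demo2"
        [| inputlookup sample.csv
         | eval search="*" . FailureMsg . "*"
         | fields search ]
    | rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
    | table _raw id_num

If neither fail_msg1 nor fail_msg2 appears in _raw, no events match and nothing is returned.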
I have mixed ADFS log data; mixed in the sense that I have both non-XML and XML-formatted data in the same event. My requirement is to extract fields from the XML portion. For example:

    <abc>WoW</abc>
    <xyz>SURE</xyz>

Both lines are in the same event. I want two fields called "abc" and "xyz" with the corresponding values WoW and SURE. Kindly help!
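A sketch with rex, using the tag names from the example above (extend the pattern for other tags as needed):

    | rex field=_raw "<abc>(?<abc>[^<]+)</abc>"
    | rex field=_raw "<xyz>(?<xyz>[^<]+)</xyz>"

Or, to pull every simple tag/value pair into a pair of multivalue fields in one pass:

    | rex field=_raw max_match=0 "<(?<tag>\w+)>(?<value>[^<]+)</"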
I have two lookups, one with 460K rows and another with 10K rows. I used join to get the 10K results from the 460K rows; however, join is not working and not returning any results. I used table and stats in both lookups, but still no results. Below are the queries I used:

    | inputlookup unix.csv
    | eval sys_name = lower(FQDN)
    | join sys_name
        [| inputlookup inventory.csv
         | eval sys_name = lower("*".sys_name."*")
         | table Status sys_name host-ip "DNS Name" ]

and

    | inputlookup unix.csv
    | eval sys_name = lower(FQDN)
    | stats values(*) as * by sys_name
    | join sys_name
        [| inputlookup inventory.csv
         | eval sys_name = lower("*".sys_name."*")
         | table Status sys_name host-ip "DNS Name" ]

Any help would be greatly appreciated.
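One likely culprit: join matches field values exactly, so wrapping sys_name in literal asterisks ("*".sys_name."*") guarantees the keys never match; wildcards are not evaluated there. A sketch without the asterisks, using lookup instead of join (this assumes inventory.csv is visible as a lookup file in the app and that its sys_name values are plain lowercase hostnames; if case differs, a lookup definition with case-insensitive matching may also be needed):

    | inputlookup unix.csv
    | eval sys_name = lower(FQDN)
    | lookup inventory.csv sys_name OUTPUT Status host-ip "DNS Name"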
Just scanning the $SPLUNK_HOME/etc/system/default/*.conf files for boolean values shows a huge disparity: "0" and "1" exceed "true/false" or "True/False" in commonality. If linted against the .spec files, most of these would fail. Is there a person who needs to see this to get it changed and made self-consistent in the default values? The vendor defaults should be the gold standard to measure against. Any and all comments on how I might pursue resolution are welcome.
Hi guys, thanks in advance. I have a task where I need to pass a parameter to Splunk from an external website, and I already have a dashboard. Based on a correlationId, we need to populate the result in Splunk. How can I pass parameters from an external website to Splunk?
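A sketch of one common approach: expose the value as a dashboard token and have the external site link to the dashboard URL with a form.* query parameter, which Simple XML dashboards read into their inputs (the host, app, and dashboard names here are placeholders):

    https://<splunk_host>:8000/en-US/app/<app_name>/<dashboard_name>?form.correlationId=<value>

In the dashboard, a text input such as <input type="text" token="correlationId"> would then pick up the value and the panels can reference $correlationId$ in their searches.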