All Topics



I am trying to merge a Splunk search with a database query result set. Basically I have a Splunk dbxquery (query 1) which returns userid and email from the database as follows for a particular user id:

| dbxquery connection="CMDB009" query="SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid AND dra.value in ('xy67383') "

The query above outputs:

VALUE     EMAIL
xv67383   xyz@test.com

The other query is a Splunk search (query 2) that provides the user ids as follows:

index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| join type=outer usetime=true earlier=true username,host,user
    [search index=index1 source="/logs/occurences.log" SERVER_SERVER_CONNECT NOT AMP earliest=@w0
    | rex field=_raw "Origusername\((?<username>.+?)\)"
    | rex field=username "^(?<user>.+?)\:"
    | rename _time as epoch1]
| stats count by user
| sort -count
| table user

Query 2 returns a column called user but not email.

What I want to do is add a column called email from dbxquery 1 for all rows whose userid matches the output of query 2; basically, add email as an additional field for each user returned by query 2.

What I tried so far is the following, but it does not give me any results. Any help would be appreciated.

index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| join type=outer usetime=true earlier=true username,host,user
    [search index=index1 source="/logs/occurences.log" SERVER_SERVER_CONNECT NOT AMP earliest=@w0
    | rex field=_raw "Origusername\((?<username>.+?)\)"
    | rex field=username "^(?<user>.+?)\:"
    | rename _time as epoch1]
| stats count by user
| sort -count
| table user
| map search="| dbxquery connection=\"CMDB009\" query=\"SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid AND dra.value in ('$user')\""

Thanks,
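One possible alternative to map is joining the dbxquery output onto the stats result. This is a minimal sketch, not a tested answer; it assumes the dbxquery result exposes the SQL columns as fields named value and email, which may need renaming in your environment:

```
index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| stats count by user
| join type=left user
    [| dbxquery connection="CMDB009"
       query="SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid"
     | rename value as user]
| table user count email
```

Pulling the whole credential table once in a subsearch and joining on user avoids running one database query per row, which is what map does.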
Hi, is this add-on used to ingest MCAS logs into Splunk, or do we need to use syslog collectors to ingest the MCAS logs? And does this add-on ingest only incidents and alerts?
Hi all, Currently I receive the logs from the Fortinet source without problems, but what worries me is the large license consumption. When I run a query I find that 94% of the logs that arrive are of type "Notice". Would it be a bad idea to ask the administrator not to send this type of log? If we block this type of log, what important information would we miss?
Hi, I have two panels in a dashboard but I need to display only one; when I click on the first panel, the second panel should appear. Please help me get this working.

Below is my code:

<dashboard version="1.1">
  <label>VLS_Customer Dashboard Clone</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>|inputlookup Samplebanking1.csv |stats count by "DEPARTMENT CODE"</query>
          <earliest>0</earliest>
          <latest></latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <search>
          <query>|inputlookup Samplebanking1.csv |stats count by STATUS</query>
          <earliest>-30m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.lineWidth">2</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</dashboard>
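One common Simple XML pattern for this is a drilldown on the first panel that sets a token, with depends on the second panel so it stays hidden until the token exists. A minimal sketch, where the token name show_second is an assumption (note that charting.drilldown must not be "none" for the click to fire):

```xml
<panel>
  <chart>
    <search>
      <query>|inputlookup Samplebanking1.csv |stats count by "DEPARTMENT CODE"</query>
    </search>
    <option name="charting.drilldown">all</option>
    <!-- clicking the chart sets the token that reveals the second panel -->
    <drilldown>
      <set token="show_second">true</set>
    </drilldown>
  </chart>
</panel>
<!-- hidden until $show_second$ is set -->
<panel depends="$show_second$">
  ...second chart goes here unchanged...
</panel>
```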
Hi, Splunkers, I have a Splunk search table as below. I want to add a duration column for each record, computed as its timestamp minus the timestamp of the previous record. For example, for record 2 the duration should be 2021-12-14 12:55:25.258 - 2021-12-14 12:55:03.339.

_time                       columna
2021-12-14 12:55:03.339     abc
2021-12-14 12:55:25.258     efg
2021-12-14 12:55:25.336     hij

Any help would be appreciated. Kevin
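streamstats can carry the previous event's _time forward so each row can subtract it. A minimal sketch, assuming the results are (or are sorted into) ascending time order:

```
... your base search ...
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval duration = _time - prev_time
| table _time columna duration
```

current=f window=1 makes last(_time) refer to the one event before the current row, so the first row's duration is null, matching the expectation that durations start at record 2.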
I'm trying to set a new dashboard token on click of a country in a choropleth that would populate with the iso2 value of that country and feed other panels, but the only option in dashboard studio I have is for a custom url.  Is there a way to accomplish what I'm trying to do using Dashboard studio?  
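Dashboard Studio can set tokens from a visualization click via an event handler in the dashboard's JSON source rather than through the UI options. This is a hedged sketch only; the token name selected_country is an assumption and the exact key path for a choropleth's clicked value should be checked against the Dashboard Studio drilldown documentation:

```json
"eventHandlers": [
  {
    "type": "drilldown.setToken",
    "options": {
      "tokens": [
        {
          "token": "selected_country",
          "key": "row.iso2.value"
        }
      ]
    }
  }
]
```

Other panels can then reference $selected_country$ in their search strings.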
Our Co. decided to remove ITSI a few months ago, and I am learning that it comes with dependent apps that I need to find and remove. Do any champs here know the remnants and related apps that are installed on Splunk servers after ITSI is installed? Please share the list and how to find and remove them. Thanks a million.
I have a job that we run on demand that creates a new log for each run, named Name.YYYYMMDDhhmmss.log. Each line in the log has a timestamp MM/DD/YYYY hh:mm:ss, and the last line of the log says "EXIT STATUS = 0". Sometimes, though, the job has a problem and stops logging and sending to Splunk. Is there any way to throw an alert when the job stops running, say when an hour passes without any new logging from the latest Name.YYYYMMDDhhmmss.log file? Thanks
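One approach is a scheduled alert that checks how long ago the most recently created log file last produced an event. A minimal sketch; the index name, source pattern, and the 60-minute threshold are assumptions:

```
| tstats latest(_time) as last_seen where index=your_index source="*Name.*.log" by source
| sort - last_seen
| head 1
| eval minutes_idle = round((now() - last_seen) / 60, 0)
| where minutes_idle > 60
```

Scheduled every few minutes with "alert when number of results > 0", this fires only when the newest log file has gone quiet for over an hour. A refinement could also exclude files whose last event contains "EXIT STATUS = 0", since those finished normally.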
Good afternoon, I am having an issue with the ThreatConnect TA. The API appears to be connecting as expected, but no logs are in the index. I observed within splunkd.log the sample found below. Looking at props.conf, it appears to be configured correctly. Has anyone had this issue? It appears the logs are in epoch time.

12-15-2021 20:36:44.013 +0000 WARN DateParserVerbose [38565 merging_1] - The TIME_FORMAT specified is matching timestamps (INVALID_TIME (1639600603907184)) outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=tc_download_indicators.py|host=127.0.0.1|threatconnect-app-logs|10128188

props.conf config:

[threatconnect-app-logs]
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s.%6N
category = Application
description = ThreatConnect App Logs
pulldown_type = 1

[threatconnect-event-data]
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s.%6N
category = Application
description = ThreatConnect Matched Event Data
pulldown_type = 1

[source::...tc_ar_send_to_playbook.log*]
sourcetype = send_event_to_threatconnect_playbook:*
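The value 1639600603907184 in the warning looks like epoch microseconds with no decimal point, while TIME_FORMAT = %s.%6N expects seconds.microseconds, so the format cannot match and Splunk falls back to an implausible timestamp. One hedged workaround is a SEDCMD that inserts the dot before timestamp extraction; this sketch assumes the JSON key is literally timestamp and the values are 16 digits, and with INDEXED_EXTRACTIONS the stanza may need to live where the structured parsing happens (e.g. the forwarder):

```
[threatconnect-app-logs]
# insert a "." between the 10 epoch-seconds digits and the 6 microsecond
# digits so TIME_FORMAT = %s.%6N can match
SEDCMD-fix_epoch = s/("timestamp":\s*)(\d{10})(\d{6})/\1\2.\3/
```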
I have many large jobs that take forever to run, sometimes 18-30 hours, and eventually error out. How do I make a list of them? I have already tried the post below on this subject, but it does not list the large (bad boys) jobs: https://community.splunk.com/t5/Splunk-Search/What-causes-delayed-searches-alerts-in-Splunk-Enterprise-Error/m-p/545405

Searches are delayed when there are no resources available at run time and they have a non-zero schedule window. The delay lasts until the schedule window closes. If, at that time, the search still can't run then it becomes "skipped". To resolve it, re-schedule the searches so fewer are scheduled at the same time. Pay particular attention to the :00, :15, :30, and :45 minutes of each hour. See https://github.com/dpaper-splunk/public/blob/master/dashboards/extended_search_reporting.xml for a helpful dashboard; just copy-paste it as a dashboard onto the node where you have those delayed searches. Another option is to use the MC's Search -> Scheduler view and look there at what those searches are. Either way, you should check this from time to time, or create an alert to inform you if there are a lot of skipped or delayed searches.
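The search jobs REST endpoint can list currently long-running jobs directly. A minimal sketch, using field names as exposed by the jobs endpoint; the 12-hour threshold is an assumption to tune:

```
| rest /services/search/jobs splunk_server=local
| search dispatchState!=DONE
| eval runtime_hours = round(runDuration / 3600, 1)
| where runtime_hours > 12
| table sid title author dispatchState runtime_hours
| sort - runtime_hours
```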
Hi, below is my log:

"{"log":"{'URI': '/api/**/***/search?', 'METHOD': 'POST', 'FINISH_TIME': '2021-Dec-15 12:15:04 CST', 'PROTOCOL': 'http', 'RESPONSE_CODE': 202, 'RESPONSE_STATUS': '202 ACCEPTED', 'RESPONSE_TIME': 4.114464243873954} ","service_name":"Digdug/digdug","container":"Digdug-digdug-2","environment":"PROD"}"

I want to extract the RESPONSE_CODE value and show it like below:

RESPONSE_CODE   Count
202             1
200             6

Thanks
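Since the inner payload is Python-dict style rather than valid JSON, a rex extraction may be the simplest route. A minimal sketch; the base search is an assumption:

```
index=your_index "RESPONSE_CODE"
| rex field=_raw "'RESPONSE_CODE':\s*(?<RESPONSE_CODE>\d+)"
| stats count as Count by RESPONSE_CODE
```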
To predict traffic, I'm building a time series model (ARIMA). I'm unable to save a fitted ARIMA model. Error: Error in 'fit' command: Algorithm "ARIMA" does not support saved models. I don't wish to retrain the ARIMA model repeatedly, as the SLAs to be met are tight. Please recommend a solution to save the ARIMA model, or any other algorithm/method that would let me do this better.
Hi, could you please help me with the below question: how do I create a recurring maintenance window in Splunk ITSI? Thanks, Prasanth G
Hi All, I am displaying the names based on dates and used a where condition to display only values that are greater than 100 (where runs > 100). Below is how the table shows, but I want to display the other values in the row with the actual value instead of showing them as empty.

| where runs > 100 | xyseries Name dayOfDate runs

Name    Date1   Date2   Date3   Date4   Date5
Sachi   101
Kohli           108
ABD             104             105
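Because the where clause drops the sub-100 rows before xyseries pivots them, those cells come out empty. Filtering on names rather than rows keeps every value for any name that has at least one run over 100. A minimal sketch:

```
... base search ...
| eventstats max(runs) as max_runs by Name
| where max_runs > 100
| fields - max_runs
| xyseries Name dayOfDate runs
```

eventstats annotates each row with the per-Name maximum without collapsing rows, so the full row of values survives into xyseries.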
Hello Fellow Splunkers! I have an environment that's using Twistlock and is deployed in EKS. We are able to collect the majority of logs via Kubernetes logging, however our team really wanted to utilize the application created for Twistlock (https://splunkbase.splunk.com/app/4555/). Has anyone else run into issues using the app for this architecture type? If not, has anyone successfully configured this application to use the predefined sourcetypes shown in the app? Any guidance will be greatly appreciated!  
Hello, splunk show-decrypted does not seem to work on a UF. Is there another solution to recover a forgotten admin password? Thanks.
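One documented recovery path on a forwarder is to move the credential store aside and seed a new admin credential with user-seed.conf. A minimal sketch; the password value is a placeholder:

```
# 1) stop the UF, then move the old credential store aside:
#    mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
# 2) create $SPLUNK_HOME/etc/system/local/user-seed.conf with:
[user_info]
USERNAME = admin
PASSWORD = ChangeMe123!
# 3) start the UF; the seed file is consumed on startup and a
#    fresh etc/passwd is generated with the new credential
```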
Requesting assistance with removing characters from logs at search time.

Sample data:

"{"log":"{\"@t\" "2021-12-15T16:26:36.1571090Z\",\"@m\" "\\\"http\\\" \\\"GET\\\" \\\"/api/v1/"

Trying to remove the extra \ \\ that came with the data via HEC.
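At search time, an eval with replace() can unescape each backslash-escaped character. A minimal sketch; whether one pass is enough depends on how many levels of escaping HEC added:

```
... base search ...
| eval cleaned = replace(_raw, "\\\\(.)", "\1")
```

If index-time cleanup is acceptable instead, a props.conf SEDCMD on the HEC sourcetype (name below is an assumption) does the same before indexing: SEDCMD-strip_escapes = s/\\(.)/\1/g under a [your_hec_sourcetype] stanza.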
[new]
DATETIME_CONFIG = /etc/apps/Test/local/datetime.xml
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE = \nExecution\sServer
CHARSET = UTF-8
TIME_FORMAT = %H:%M:%S.%3N
MAX_EVENTS = 10000
SEDCMD-test = s/Ex\w.*\nS\w+.*\n+\+-.*\n\|\s+\w.*\n\+-.*|\|Ste\w.*\n\|P\w.*\n\|T\w.*\n\|V\w.*\n+\|\n\|Va\w.*|\|Para.*|\+-.*//g
TRUNCATE = 0
Hello, I have 10 servers serving the same purpose. If one server is down, the others remain active so there is no loss of business continuity. ABC.log is generated across all the servers with the same content. We needed to add all 10 servers to serverclass.conf, and we did so, but ABC.log is reaching Splunk multiple times, i.e., each event is repeated 5 to 6 times. I would appreciate any help to avoid multiple ingestion of the same log from different servers, or to avoid the duplicate events. I added crcSalt in inputs.conf, but it is not working. Thanks
Hello, Due to a specific requirement we have to install a Splunk Universal Forwarder acting as an "intermediate forwarder". Basically it will receive data via TCP (to leverage a persistent queue), and it has to forward that data onward over HTTP. Forwarding data over HTTP is possible since Splunk Universal Forwarder 8.x: https://docs.splunk.com/Documentation/Forwarder/8.2.3.1/Forwarder/Configureforwardingwithoutputs.conf#Configure_the_universal_forwarder_to_send_data_over_HTTP

Here is the set-up:

# inputs.conf
[tcp://9997]
persistentQueueSize = 1000MB
connection_host = none
disabled = false

# outputs.conf (example from Splunk)
[httpout]
httpEventCollectorToken = eb514d08-d2bd-4e50-a10b-f71ed9922ea0
uri = https://10.222.22.122:8088

What we also want to achieve is to forward only the data received via TCP, and not forward the Splunk UF internal logs. I didn't find a _HTTP_ROUTING-style setting (like _TCP_ROUTING, for example) to put in inputs.conf. Therefore, after listing all the Splunk UF inputs with this command:

/opt/splunkforwarder/bin/splunk btool inputs list --debug

I was thinking about this configuration:

# props.conf
[source::/opt/splunkforwarder/...]
force_local_processing = true
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Do you think it is going to work? Another option could be to tag the TCP inputs' host based on DNS or IP, and then move to the nullQueue all the logs produced by the Splunk UF itself:

# inputs.conf
[tcp://9997]
persistentQueueSize = 1000MB
connection_host = dns
disabled = false

# props.conf
[host::mysplunkUFhostname]
force_local_processing = true
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Do you see any other possible configuration?

Thanks a lot, Edoardo