All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone, I need help with regex. I have this search:

index=* | regex Commandline="my_regular_expression"

How can I add one more regular expression with an OR condition? Something like this:

| regex Commandline="my_regular_expression" OR | regex Commandline="my_regular_expression2"

Thank you
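A minimal sketch of one common approach (the pattern names are the placeholders from the question): the regex command takes a single PCRE pattern, so an OR is usually written as alternation inside one pattern rather than as two regex commands.

index=*
| regex Commandline="(my_regular_expression|my_regular_expression2)"

Two regex commands piped back to back behave as an AND (both must match), which is why alternation within a single pattern is the usual way to match either expression.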
Hello, I am a bit confused as to how Splunk manages its indexes through AWS cloud services, and I am not sure whether the EBS and S3 services are interchangeable for this type of deployment. For example, is S3 only for archiving frozen buckets, or can it be used for hot/warm/cold buckets as well? Is there some documentation about best practices here? Compare and contrast? Thanks! Andrew
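For context, a minimal indexes.conf sketch of how S3 is commonly brought in via SmartStore (the bucket name, paths, and index name below are placeholders): S3 becomes the remote store for warm buckets, while hot buckets stay on local block storage such as EBS.

[volume:remote_store]
storageType = remote
path = s3://my-splunk-bucket/indexes

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/my_index

Without SmartStore, hot/warm/cold buckets need filesystem storage (EBS), and S3 is typically reached only through a coldToFrozenScript that uploads frozen buckets, so the two services are complementary rather than interchangeable.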
My indexer is totally full now and new items cannot be indexed. The previous settings also seem to be not working.

[root@splunk-masternode local]# cat indexes.conf
homePath.maxDataSizeMB = 80000

# Hot and Cold - External data sources
[volume:secondary]
path = /splunk/splunkdata
maxVolumeDataSizeMB = 1650000

I have tuned maxDataSizeMB to 40000 instead and maxVolumeDataSizeMB to 155000 instead and restarted, but it is not clearing off.

/dev/mapper/splunk_hotbucket-hotbucket 1.8T 1.7T 4.9G 100% /splunk/splunkdata

The 1.65T limit also seems to be not working, as it is now at 1.7T. Does anybody have any advice? These are currently my indexes.conf settings.

[root@splunk-masternode local]# cat indexes.conf
# VOLUME SETTINGS
# In this example, the volume spec here is set to the indexer-specific
# path for data storage. It satisfies the "volume:primary" tag used in
# the indexes.conf which is shared between SH and indexers.
# See also: org_all_indexes

# One Volume for Hot and Cold - Splunk default internal indexes
[volume:primary]
path = /splunk/splunkdata_internal
# Note: The *only* reason to use a volume is to set a cumulative size-based
# limit across several indexes stored on the same partition. There are *not*
# time-based volume limits.
# ~5 TB
maxVolumeDataSizeMB = 5120

# Hot and Cold - External data sources
[volume:secondary]
path = /splunk/splunkdata
maxVolumeDataSizeMB = 1550000

[volume:cold]
path = /splunk/splunkdata_cold

#[volume:frozen]
#path = /splunk/splunkdata_frozen

# This setting changes the storage location for _splunk_summaries,
# which should be utilized if you want to use the same partition
# as specified for volume settings. Otherwise defaults to $SPLUNK_DB.
#
# The size setting of the volume shown below would place a limit on the
# total size of data model acceleration (DMA) data. Doing so should be
# carefully considered as it may have a negative impact on applications
# like Enterprise Security.
#
[volume:_splunk_summaries]
path = /splunk/splunkdata
# ~ 100GB
# maxVolumeDataSizeMB = 100000

homePath.maxDataSizeMB = 40000
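One detail worth checking, offered as an assumption based on the snippet above: homePath.maxDataSizeMB only takes effect in the right scope, so a per-index cap normally sits inside an index stanza whose paths reference the volume, roughly like this (the index name is a placeholder):

[my_index]
homePath = volume:secondary/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = /splunk/splunkdata/my_index/thaweddb
homePath.maxDataSizeMB = 40000

Also note that when maxVolumeDataSizeMB is exceeded, Splunk freezes the oldest buckets rather than deleting data outright, so a filesystem that is already at 100% may need frozen or unrelated data removed before the limits appear to take hold.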
Hi All, I have this short bash script, and I want to encrypt the admin and changeme credentials, because they are displayed in clear text.

#!/bin/bash
/opt/splunk/bin/splunk set minfreemb 1000 -auth admin:changeme
/opt/splunk/bin/splunk edit user test01 -force-change-pass true -auth admin:changeme

Is there any way to achieve this?
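A minimal sketch of one common workaround (the credentials file path is a placeholder): keep the secret out of the script entirely and read it at run time from a file readable only by the service account.

#!/bin/bash
# Read "user:password" from a file locked down with chmod 600
CREDS="$(cat /opt/splunk/etc/.splunk_creds)"
/opt/splunk/bin/splunk set minfreemb 1000 -auth "$CREDS"
/opt/splunk/bin/splunk edit user test01 -force-change-pass true -auth "$CREDS"

Another option, where a cached session is acceptable, is to run /opt/splunk/bin/splunk login once interactively; subsequent CLI calls in the same user context then work without any -auth flag, so no password ever lands in a script.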
Hi, I was wondering what the target server is that the Splunk server connects to for update alerts. It looks like the network is not fully private, because I am still getting update alerts. Thanks in advance.
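For reference, a sketch of the relevant setting, with the caveat that the exact endpoint should be verified for your version: Splunk Web's version check reaches out to a Splunk-hosted update URL, and it can be disabled in web.conf, which also stops the update alerts.

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
updateCheckerBaseURL = 0

If update alerts appear, the search head can evidently reach that external URL, which is one way to confirm the network is not fully isolated.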
I have a health check file with the extension .log. When I uploaded it to Splunk, it came out like this. The real file is like this. Does anyone know what the problem is?
We are using Splunk version 6.2.4. Recently, I received a call saying that a vulnerability was also found in the 1.2.xx versions of log4j. The log4j-1.2.14.jar and log4j-1.2.15.jar files were found on Splunk. I want to know if those jar files are used and whether they are vulnerable. Thank you.
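A quick way to inventory the jars in question, sketched for a default install path (adjust to your $SPLUNK_HOME):

find /opt/splunk -name 'log4j*.jar' 2>/dev/null

Keep in mind that the presence of a log4j 1.2.x jar does not by itself confirm exploitability; that depends on whether the owning component actually loads it and on which appenders are configured, so identifying which component ships each jar is worthwhile.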
Hello all.

I was reading over the article at https://www.splunk.com/en_us/blog/security/log4shell-detecting-log4j-vulnerability-cve-2021-44228-continued.html, specifically the New Outbound Traffic Detection with Baseline section.

Can someone explain to me the appendpipe subsearch's purpose and how it works? (I split it into parts, but it's actually one search.)

| tstats summariesonly=false allow_old_summaries=true earliest(_time) as earliest latest(_time) as latest values(All_Traffic.action) as action values(All_Traffic.app) as app values(All_Traffic.dest_ip) as dest_ip values(All_Traffic.dest_port) as dest_port values(sourcetype) as sourcetype count from datamodel=Network_Traffic where (NOT (All_Traffic.dest_category="internal" OR All_Traffic.dest_ip=10.0.0.0/8 OR All_Traffic.dest_ip=172.16.0.0/12 OR All_Traffic.dest_ip=192.168.0.0/16 OR All_Traffic.dest_ip=100.64.0.0/10)) by All_Traffic.src_ip All_Traffic.dest_ip
| rename "All_Traffic.*" as *
| lookup egress_src_dest_tracker.csv dest_ip src_ip OUTPUT earliest AS previous_earliest latest AS previous_latest
| eval earliest=min(earliest, previous_earliest), latest=max(latest, previous_latest)
| fields - previous_*
| appendpipe
    [ | fields src_ip dest_ip latest earliest
      | stats min(earliest) as earliest max(latest) as latest by src_ip, dest_ip
      | inputlookup append=t egress_src_dest_tracker.csv
      | stats min(earliest) as earliest max(latest) as latest by src_ip, dest_ip
      | outputlookup egress_src_dest_tracker.csv
      | where a=b ]
| eventstats max(latest) as maxlatest
| eval comparisonTime="-1h@h"
| eval isOutlier=if(earliest >= relative_time(maxlatest, comparisonTime), 1, 0)
| convert timeformat="%Y-%m-%dT%H:%M:%S" ctime(earliest),ctime(latest),ctime(maxlatest)
| where isOutlier=1

I am trying to understand what this appendpipe portion is doing. Here is my current thought process:
0) It takes the results from the previous set of commands.
1) It summarizes latest/earliest by src/dest.
2) It appends the lookup.
3) It gets the earliest/latest by src/dest again. (Would the result be the same if we skipped #1?)
4) It saves the results.
5) What does this where clause mean? There is no a or b field that I can see.

Thanks!
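For what it's worth, a minimal, self-contained sketch of the same appendpipe pattern (all field and lookup names here are made up): the subsearch exists only for its outputlookup side effect, and the final where with an impossible condition discards the appended rows so the main results pass through untouched.

| makeresults count=3
| eval src=random() % 3
| appendpipe
    [ stats count by src
      | outputlookup hypothetical_tracker.csv
      | where 1=2 ]

In the blog's search, where a=b plays the same role as where 1=2 above: neither field exists, so the comparison is never true and every row produced inside the appendpipe is dropped, leaving the lookup update as the only lasting effect.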
index="my_index" |eval check=if(html_code==200,"error","OK") |stats count values(clientip) as src_ip by ip , check |table src_ip , ip, check , count |collect index=error_ip_count I'm going t... See more...
index="my_index" |eval check=if(html_code==200,"error","OK") |stats count values(clientip) as src_ip by ip , check |table src_ip , ip, check , count |collect index=error_ip_count I'm going to call up "error_ip_count" after using that command. I used index=error_ip_count, but I couldn't call it up. Is there a wrong way to use it?
I'm trying to disable the y-axis using the same option as in a line chart graph, but with the outlier graph it cannot hide the y-axis. Is there any method to disable this y-axis?

<panel>
  <viz type="Splunk_ML_Toolkit.OutliersViz">
    <title>abc</title>
    <search>
      <query>index="abc" data_type="real_data" label=abc=$os$
| lookup outlier_value.csv label, operating_system OUTPUTNEW upper_bound lower_bound
| eval rndoff_avg_rt = round(avg_rt,2)
| rename rndoff_avg_rt as t
| table _time t lower_bound upper_bound</query>
      <earliest>$time.earliest$</earliest>
      <latest>$time.latest$</latest>
    </search>
    <option name="drilldown">none</option>
    <option name="charting.axisLabelsY.majorLabelVisibility">hide</option>
    <option name="height">500</option>
    <option name="refresh.display">progressbar</option>
  </viz>
</panel>

Thanks
I am trying to merge a Splunk search query with a database query result set. Basically I have a Splunk dbxquery (query 1) which returns userid and email from the database as follows for a particular user id:

| dbxquery connection="CMDB009" query="SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid AND dra.value in ('xy67383') "

The above query outputs:

VALUE        EMAIL
xv67383      xyz@test.com

The other query is a Splunk search (query 2) that provides the user ids as follows:

index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| join type=outer usetime=true earlier=true username,host,user
    [search index=index1 source="/logs/occurences.log" SERVER_SERVER_CONNECT NOT AMP earliest=@w0
    | rex field=_raw "Origusername\((?<username>.+?)\)"
    | rex field=username "^(?<user>.+?)\:"
    | rename _time as epoch1]
| stats count by user
| sort -count
| table user

This query 2 returns a column called user but not email.

What I want to do is add a column called email from dbxquery 1 for all rows matching by userid in the output of query 2. Basically I want to add email as an additional field for each user returned in query 2.

What I have tried so far is this, but it does not give me any results. Any help would be appreciated.

index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| join type=outer usetime=true earlier=true username,host,user
    [search index=index1 source="/logs/occurences.log" SERVER_SERVER_CONNECT NOT AMP earliest=@w0
    | rex field=_raw "Origusername\((?<username>.+?)\)"
    | rex field=username "^(?<user>.+?)\:"
    | rename _time as epoch1]
| stats count by user
| sort -count
| table user
| map search="| dbxquery connection=\"CMDB009\" query=\"SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid AND dra.value in ('$user'):\""

Thanks,
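One pattern that often works better than map here, sketched under the assumption that the dbxquery columns come back as VALUE and EMAIL: run the dbxquery once in a subsearch and join on the user field.

index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| stats count by user
| join type=left user
    [| dbxquery connection="CMDB009" query="SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid"
    | rename VALUE as user, EMAIL as email
    | fields user email]
| table user email count

The rename assumes dbxquery returns upper-case column names; adjust to whatever field names actually appear. This also avoids running one database query per result row, which is what map does.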
Hi, is this add-on for ingesting MCAS logs into Splunk, or do we need to use syslog collectors to ingest the MCAS logs? And does this add-on only ingest incidents and alerts?
Hi all, currently I receive the logs from the Fortinet source without problems, but what worries me is the large license consumption. When I run a query, I find that 94% of the logs that arrive are of type "Notice". Would it be a bad idea to ask the administrator not to send this type of log? If this type of log were blocked, what important information would be missed?
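To quantify the impact before deciding, a sketch of the kind of search often used (the index, sourcetype, and level field names are assumptions; Fortinet logs commonly carry a level field):

index=fortinet sourcetype=fortigate*
| eval raw_len=len(_raw)
| stats sum(raw_len) as bytes count by level
| eval MB=round(bytes/1024/1024,2)
| sort - MB

This approximates license consumption per log level from raw event size, which helps verify the 94% figure and shows how much license the Notice level actually costs before you ask for it to be dropped.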
Hi, I have two panels in a dashboard but I need to display only one; when I click on the first panel, the second panel should appear. Please help me get this working.

Below is my code:

<dashboard version="1.1">
  <label>VLS_Customer Dashboard Clone</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>|inputlookup Samplebanking1.csv |stats count by "DEPARTMENT CODE"</query>
          <earliest>0</earliest>
          <latest></latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <search>
          <query>|inputlookup Samplebanking1.csv |stats count by STATUS</query>
          <earliest>-30m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.lineWidth">2</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</dashboard>
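A common Simple XML pattern for this, sketched with a made-up token name (show_second): hide the second panel with depends and set the token from the first panel's drilldown. Note the first panel's charting.drilldown option must be changed from none to all, or the click never registers.

<panel>
  <chart>
    <search>
      <query>|inputlookup Samplebanking1.csv |stats count by "DEPARTMENT CODE"</query>
    </search>
    <option name="charting.drilldown">all</option>
    <drilldown>
      <set token="show_second">true</set>
    </drilldown>
  </chart>
</panel>
<panel depends="$show_second$">
  <!-- second panel unchanged; it stays hidden until show_second is set -->
</panel>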
Hi Splunkers, I have a Splunk search table as below, and I want to add a duration column for each record, computed as its timestamp minus the timestamp of the previous record. For example, here for record 2 the duration should be 2021-12-14 12:55:25.258 - 2021-12-14 12:55:03.339.

_time                                          columna
2021-12-14 12:55:03.339    abc
2021-12-14 12:55:25.258    efg
2021-12-14 12:55:25.336    hij

Any help would be appreciated.
Kevin
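A minimal sketch with streamstats (assuming the events are sorted oldest first; the field names mirror the table above):

... your base search ...
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval duration = _time - prev_time
| table _time columna duration

current=f makes last(_time) return the previous row's timestamp, so duration comes out in seconds with fractional precision; the first row has no previous record and gets a null duration.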
I'm trying to set a new dashboard token on click of a country in a choropleth, which would be populated with the iso2 value of that country and feed other panels, but the only option I have in Dashboard Studio is for a custom URL. Is there a way to accomplish what I'm trying to do using Dashboard Studio?
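For what it's worth, a sketch of the kind of event handler Dashboard Studio's JSON source accepts on a visualization (the token name and key path here are assumptions to verify against the docs for your version):

"eventHandlers": [
  {
    "type": "drilldown.setToken",
    "options": {
      "tokens": [
        { "token": "selected_country", "key": "row.iso2.value" }
      ]
    }
  }
]

If the UI only offers a custom URL, editing the dashboard definition's source directly is typically how a setToken drilldown is added.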
Our company decided to remove ITSI a few months ago, and I am learning that it comes with dependent apps that I need to find and remove. Do any champs here know the remnants and related apps that are installed on Splunk servers after ITSI is installed? Please share the list and how to find and remove them. Thanks a million.
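One way to inventory candidates before deleting anything, offered as an assumption (ITSI components conventionally carry SA-ITOA / SA-ITSI- / DA-ITSI- style prefixes, but verify the list against the ITSI install docs for your version):

| rest /services/apps/local splunk_server=local
| search title="SA-ITOA" OR title="SA-ITSI-*" OR title="DA-ITSI-*" OR title="*itsi*"
| table title label version

Once confirmed, each app is removed by deleting its directory under $SPLUNK_HOME/etc/apps/ and restarting Splunk.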
I have a job that we run on demand that creates a new log for the job. It's formatted Name.YYYYMMDDhhmmss.log. Each line in the log has a timestamp MM/DD/YYYY hh:mm:ss. The last line of the log would say "EXIT STATUS = 0". Sometimes, though, the job can have a problem and stops logging and sending to Splunk. Is there any way to throw an alert when the job stops running and, say, an hour passes without any new logging from the latest Name.YYYYMMDDhhmmss.log file? Thanks
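A sketch of one alerting approach (the index name is a placeholder; the source pattern follows the naming scheme above): find the most recent log file and alert when its newest event is older than an hour.

index=my_index source=*/Name.*.log earliest=-7d
| stats latest(_time) as last_seen by source
| sort - source
| head 1
| where last_seen < relative_time(now(), "-1h")

Because the filenames embed YYYYMMDDhhmmss, sorting by source descending puts the latest run first. Scheduled every 15 minutes or so, this returns a row (and therefore fires the alert) only when the latest job log has gone quiet for over an hour; pairing it with a check that "EXIT STATUS = 0" never arrived would distinguish a hung job from one that finished.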
Good afternoon,

I am having an issue with the ThreatConnect TA. The API appears to be connecting as expected, but no logs are in the index. I observed within splunkd.log the log sample found below. Looking at props.conf, it appears to be configured correctly. Has anyone had this issue? It appears the logs are in epoch time.

12-15-2021 20:36:44.013 +0000 WARN DateParserVerbose [38565 merging_1] - The TIME_FORMAT specified is matching timestamps (INVALID_TIME (1639600603907184)) outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=tc_download_indicators.py|host=127.0.0.1|threatconnect-app-logs|10128188

props.conf config:

[threatconnect-app-logs]
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s.%6N
category = Application
description = ThreatConnect App Logs
pulldown_type = 1

[threatconnect-event-data]
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s.%6N
category = Application
description = ThreatConnect Matched Event Data
pulldown_type = 1

[source::...tc_ar_send_to_playbook.log*]
sourcetype = send_event_to_threatconnect_playbook:*
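Reading the warning literally, a sketch of one mitigation (an assumption, not a confirmed fix): the value 1639600603907184 looks like epoch microseconds with no decimal point, so %s.%6N may be parsing the whole integer as seconds and landing far outside the accepted window, which is exactly what MAX_DAYS_AGO / MAX_DAYS_HENCE guard against.

[threatconnect-app-logs]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s.%6N
# Widen the acceptance window as a diagnostic step, per the warning's own suggestion
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 10

If events then index with absurd dates, the real fix is matching TIME_FORMAT to the actual field format; a dot-less microsecond epoch would need the value transformed or a different parse.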
I have many large jobs that take forever to run, sometimes 18-30 hours, and eventually error out. How do I make a list of them? I have already tried the post below related to this subject, but it does not list the large (bad boys) jobs:

https://community.splunk.com/t5/Splunk-Search/What-causes-delayed-searches-alerts-in-Splunk-Enterprise-Error/m-p/545405

Searches are delayed when there are no resources available at run-time and they have a non-zero schedule window. The delay lasts until the schedule window closes. If, at that time, the search still can't run, then it becomes "skipped". To resolve it, re-schedule the searches so fewer are scheduled at the same time. Pay particular attention to the :00, :15, :30, and :45 minutes of each hour. See https://github.com/dpaper-splunk/public/blob/master/dashboards/extended_search_reporting.xml for a helpful dashboard; just copy-paste it as a dashboard on the node where you have those delayed searches. Another option is to use the MC's Search -> Scheduler view and look there at what those searches are. In any case, you should review this from time to time, or create an alert to inform you if there are a lot of skipped or delayed searches.
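To list the long runners directly, a sketch against the audit index (the one-hour threshold is arbitrary; total_run_time is reported in seconds):

index=_audit action=search info=completed earliest=-7d
| where total_run_time > 3600
| table _time user savedsearch_name total_run_time
| sort - total_run_time

This surfaces completed searches that ran over an hour in the last week; searches that error out before completing may be recorded with a different info value rather than completed, so it can be worth relaxing the info filter when hunting for them.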