All Topics

Hi All, I would like to know which applications are ingesting the most data and causing license violations. I tried the query below, but I am not sure whether it gives correct results.

index=_internal source=*license_usage.log type="Usage" splunk_server=*
| eval Date=strftime(_time, "%Y/%m/%d")
| streamstats sum(b) as volume
| eval MB=round(volume/1024/1024,5)
| timechart span=1w avg(MB) by idx

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by h
| eval MB=round(bytes/1024/1024,1)
| fields h MB
| rename h as host
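License usage in license_usage.log is reported by pool, index, sourcetype and host rather than by application, so a common hedged approximation is to chart daily usage per index (b, idx and type=Usage are the standard fields in those events):

```spl
index=_internal source=*license_usage.log type=Usage
| eval MB=round(b/1024/1024,2)
| timechart span=1d sum(MB) by idx
```

Replacing idx with st in the by clause gives the same breakdown per sourcetype, which is often closer to "per application".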
Hello, I have a Windows client and Splunk Enterprise on another Windows machine, connected through a MikroTik router in GNS3. I want to send my browser history to Splunk and view it there. How do I do that? My browser is Google Chrome. I managed it in Mozilla Firefox by adding a monitor input on the profile directory. Thanks.
Can we populate the raw events from one index into a summary index? If yes, how can I do that? Please help.
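If copying events into a summary index is really what's needed, a hedged sketch with the collect command (my_summary is a placeholder index name that must already exist and be writable):

```spl
index=main sourcetype=your_sourcetype
| collect index=my_summary
```

Note that collect is usually fed the results of a reporting command (stats, timechart, etc.) rather than raw events, to keep the summary index small.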
I just installed the Knowledge Object Overview App for Splunk (SplunkWorks, contributor: Jason New) and it seems it's missing macros, and most panels don't update. Any suggestions on contacts for support?
Hello, I am trying to collect data from a PowerMax array with the Dell EMC Add-on. During my tests on the dev environment (standalone mode) everything worked perfectly. When I turn on my prod environment, no data comes in. It seems my heavy forwarder receives data but doesn't send it to the indexers. I don't see any error in my ta_dellemc_vmax_inputs.log file. Here is some information about my environment:

My dev environment, in which everything works well: 1 search head with Linux RHEL 7.9 and Splunk 8.2.3
My prod environment:
Heavy forwarder: Linux 3.10.0, RHEL 7.9 and Splunk 8.1.6
Indexers: Linux 3.10.0, RHEL 7.9 and Splunk 8.2.3
Search heads: Linux 3.10.0 and Splunk 8.2.3

Some logs from ta_dellemc_vmax_inputs.log:

2022-02-07 19:01:55,649 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Passed performance timestamp recency check: 1644258000000.
2022-02-07 19:01:55,650 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Starting metrics collection run.
2022-02-07 19:01:56,003 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Array collection complete.
2022-02-07 19:01:56,085 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | SRP collection complete.
2022-02-07 19:01:59,258 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Storage Group collection complete.
2022-02-07 19:02:00,386 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Director collection complete.
2022-02-07 19:02:00,387 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Finished collection run.
2022-02-07 19:02:00,388 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Completed metrics collection run in 6 seconds.

Could you please help me resolve this issue or give some advice about the configuration? Thanks.
Is it possible to prevent a system admin from adding inputs on a forwarder? I only want sanctioned inputs to be used, i.e. I want our Splunk admins to approve all forwarders and inputs. I thought the deployment server might solve this, but it does not cover all local inputs per se.
I need to add a line that shows the number of results I get in the time frame of the dashboard search, using the job.earliestTime and job.latestTime tokens, but when I use the replace function it doesn't work. This is the dashboard code I used:

<html>
  <div class="custom-result-value">Results: $result$ in the time frame ($stime$ to $ltime$)</div>
</html>
<table id="test_table">
  <search>
    <query>| metadata type=sources | eval lastTime=strftime(lastTime, "%Y-%m-%d %H:%M:%S.%Q"), firstTime=strftime(firstTime, "%Y-%m-%d %H:%M:%S.%Q"), recentTime=strftime(recentTime, "%Y-%m-%d %H:%M:%S.%Q")</query>
    <earliest>-365d@d</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
    <progress>
      <eval token="result">tonumber('job.resultCount')</eval>
      <eval token="ltime">tostring('job.latestTime')</eval>
      <eval token="stime">tostring('job.earliestTime')</eval>
      <eval token="stime">replace('$stime$',"\+\d+:00","")</eval>
      <eval token="stime">replace('$ltime$',"\+\d+:00","")</eval>
    </progress>
  </search>
  <option name="count">10</option>
  <option name="dataOverlayMode">none</option>
  <option name="drilldown">none</option>
  <option name="percentagesRow">false</option>
  <option name="rowNumbers">true</option>
  <option name="totalsRow">false</option>
  <option name="wrap">true</option>
</table>

I get the same result whether or not I use the replace command. The replace command works when used in a regular search.
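One thing worth checking in the snippet above: both replace evals assign to the stime token (the second one reads $ltime$ but stores the result back into stime, so ltime never gets cleaned). A hedged sketch of the corrected pair, reading the job properties directly the same way the tostring evals in the same <progress> block do:

```xml
<eval token="stime">replace('job.earliestTime', "\+\d+:00", "")</eval>
<eval token="ltime">replace('job.latestTime', "\+\d+:00", "")</eval>
```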
We encountered an error after we upgraded to a new version of Splunk. This Splunk instance is part of a distributed environment and is one of the indexers in a cluster. Please see the output below after running ./splunk status:

Exception: <class 'PermissionError'>, Value: [Errno 13] Permission denied: '/opt/splunk/etc/system/local/migration.conf'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1359, in <module>
    sys.exit(main(sys.argv))
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1212, in main
    parseAndRun(argsList)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1067, in parseAndRun
    retVal = cList.getCmd(command, subCmd).call(argList, fromCLI = True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 293, in call
    return self.func(args, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/control_api.py", line 35, in wrapperFunc
    return func(dictCopy, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/_internal.py", line 189, in firstTimeRun
    migration.autoMigrate(args[ARG_LOGFILE], isDryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 3166, in autoMigrate
    checkTimezones(CONF_PROPS, dryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 411, in checkTimezones
    migSettings = comm.readConfFile(PATH_MIGRATION_CONF)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli_common.py", line 172, in readConfFile
    f = open(path, 'rb')
PermissionError: [Errno 13] Permission denied: '/opt/splunk/etc/system/local/migration.conf'
Please file a case online at http://www.splunk.com/page/submit_issue

We also tried the chown command, but still no luck.
I have the error below showing on the search head; I've been looking for its cause with no luck.

Unable to initialize modular input "itsi_suite_enforcer" defined in the app "SA-ITOA": Introspecting scheme=itsi_suite_enforcer: script running failed (exited with code 1)

Has anyone ever encountered a similar error?
Hi Team, I'm looking to integrate Splunk with Tableau and was able to do it successfully as far as Tableau Desktop, but when I try to publish the dashboard I get these errors:

[unixODBC][Driver Manager]Can't open lib 'Splunk ODBC Driver' : file not found
Generic ODBC requires additional configuration. The driver and DSN (data source name) must be installed and configured to match the connection.
Unable to connect to the server "Splunk ODBC Driver". Check that the server is running and that you have access privileges to the requested database.

I reached out to the Tableau admin team and they say there are no supported ODBC drivers for Linux, so this can't be done. If anyone in this group has successfully integrated Splunk with Tableau (all Linux), let me know the process to overcome this error. Thanks.
We use Splunk Enterprise and would like to know if there is a way to disable email alerts for multiple Splunk alerts. I don't want to manually disable each alert during that window. Is there a curl command I can run so that multiple alerts are disabled? Can I feed all the alert names from a .csv into a command that disables them all at once? @titleistfour, referring to your thread: https://community.splunk.com/t5/Alerting/Is-there-an-easy-way-to-use-the-REST-API-to-disable-Splunk/m-p/183961#M3085 and https://stackoverflow.com/questions/51799979/splunk-disabling-alerts-during-maintenance-window
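One hedged approach is the saved-search REST endpoint, which accepts disabled=1: loop over names from a CSV and POST to each. Everything below (port 8089, admin:changeme, the nobody/search namespace, the sample alert names) is a placeholder to adapt to your environment; with DRY_RUN=1 the script only prints the curl commands it would run.

```shell
# Sketch only: bulk-disable saved searches (alerts) via the Splunk REST API.
# Assumptions: management port 8089, credentials admin:changeme, alerts in the
# "search" app under the "nobody" user namespace. Set DRY_RUN=0 to execute.
DRY_RUN=1
printf '%s\n' 'Alert One' 'Alert Two' > alerts.csv   # hypothetical alert names
: > disable_cmds.txt
while IFS= read -r alert; do
  # URL-encode spaces in the saved search name
  encoded=$(printf '%s' "$alert" | sed 's/ /%20/g')
  cmd="curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches/${encoded} -d disabled=1"
  if [ "$DRY_RUN" = 1 ]; then
    echo "$cmd" | tee -a disable_cmds.txt
  else
    eval "$cmd"
  fi
done < alerts.csv
```

Re-running the same loop with disabled=0 re-enables the alerts after the maintenance window.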
Binning/timecharting seems quite straightforward regarding time... unless you want to span day-plus ranges. From experience I would say that if you bin or timechart with a span of a day or more, the value of _time gets snapped to midnight in the user's timezone. That's what experience shows. But the question is (because I can't find any): is there official Splunk documentation stating that this is the designed behaviour?
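Not documentation, but the behaviour is easy to reproduce; in a sketch like this (times interpreted in the search-time user timezone), the snapped value comes back at midnight of the same day:

```spl
| makeresults
| eval _time=strptime("2022-02-07 15:30:00", "%Y-%m-%d %H:%M:%S")
| bin _time span=1d
| eval snapped=strftime(_time, "%Y-%m-%d %H:%M:%S")
```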
I have data as follows:

time=1 msgid=1 event=new_msg
time=2 msgid=1 delivery=1 event=start_delivery
time=3 delivery=1 event=deferred_delivery
time=4 msgid=1 delivery=2 event=start_delivery
time=5 delivery=2 event=successful_delivery
time=6 msgid=1 event=end_msg

What I would like to achieve is to group events together from "new_msg" to "end_msg", including all "*_delivery" events. I have tried to use

... | transaction msgid delivery startswith="new_msg" endswith="end_msg"

The problem is that I never get all the events together in one transaction, but mostly the events from time=1,2,3. I also did some experiments with the "keepevicted", "keeporphans" and "connected" transaction parameters. Sometimes I also get the "final" events from time=4,5,6 as a separate transaction. What never worked out is getting a single transaction for all of those events. Note that there may be more than the two delivery attempts shown in the example. My assumption is that "transaction" is unable to follow changing values in one of the provided fields, as is the case with "delivery". I'd appreciate any help – thank you!
Hi All, I want to show the sum of a field by year (2019, 2020, 2021). I am using this query:

|inputlookup abc.csv
| eval _time=strptime('date1',"%Y-%m-%d")
| eval year=strftime(_time,"%Y")
| chart sum(com) as com by field1, year
| addcoltotals

The output columns are: field1, 2019, 2020, 2021. The total for 2020 is correct, but I am facing an issue with 2019 & 2021. Please help me find the correct solution. Thanks, ND
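One hedged guess: if some field1/year combinations have no rows, chart leaves those cells null, and null cells can make column totals look wrong; filling them with 0 before totaling may help (labelfield/label are optional cosmetics):

```spl
| inputlookup abc.csv
| eval year=strftime(strptime(date1, "%Y-%m-%d"), "%Y")
| chart sum(com) as com by field1, year
| fillnull value=0
| addcoltotals labelfield=field1 label=Total
```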
Hi, I have the panel below on a dashboard; however, the bar chart is not displaying the colors per the specified ranges.

<row>
  <panel>
    <title>Usage Prediction (Month To Date)</title>
    <chart>
      <search>
        <query>index="license_summary" | table _time, Used, Quota | eval Consumption=round(Used, 2) | timechart span=1d latest(Consumption) as lConsumption | predict lConsumption period=7 | rangemap field=lConsumption green=0-50 yellow=50-100 blue=100-125 red=125-500</query>
        <earliest>@mon</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.fieldColors">{"green": 0x00FF00, "yellow": 0xFFFF00, "blue": 0x0000FF, "red": 0xFF0000}</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisLabelsY.majorUnit">25</option>
      <option name="charting.axisLabelsY2.majorUnit">25</option>
      <option name="charting.axisTitleX.text">Time</option>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.text">License Usage (GB)</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.text">Prediction</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisX.abbreviation">none</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.abbreviation">none</option>
      <option name="charting.axisY.maximumNumber">155</option>
      <option name="charting.axisY.minimumNumber">0</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.abbreviation">none</option>
      <option name="charting.axisY2.enabled">1</option>
      <option name="charting.axisY2.maximumNumber">155</option>
      <option name="charting.axisY2.minimumNumber">0</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.chart">column</option>
      <option name="charting.chart.bubbleMaximumSize">50</option>
      <option name="charting.chart.bubbleMinimumSize">10</option>
      <option name="charting.chart.bubbleSizeBy">area</option>
      <option name="charting.chart.nullValueMode">gaps</option>
      <option name="charting.chart.overlayFields">Prediction</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
      <option name="charting.chart.stackMode">default</option>
      <option name="charting.chart.style">shiny</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.mode">standard</option>
      <option name="charting.legend.placement">none</option>
      <option name="charting.lineWidth">2</option>
      <option name="trellis.enabled">0</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">medium</option>
    </chart>
  </panel>
</row>

Can you suggest a fix?
Hi, I'm trying to use Splunk to monitor exception logs; Splunk sends me an email if there is an exception. I set a throttle of 6 hours to avoid getting too many emails. Most of the time 6 hours is fine, but sometimes it's too long for us, and I have to wait 6 hours for the alert to come back. Are there any options to bring the alert back sooner? Thanks.
Hi, I am using the query below to trigger an alert.

| tstats count WHERE index=your_index AND (TMPFIELD="FIELD1" OR TMPFIELD="FIELD2" OR TMPFIELD="FIELD3") GROUPBY index TMPFIELD _time latest=-1h@h earliest=@h
| timechart count(eval(FIELD1)) AS FIELD1 count(eval(FIELD2)) AS FIELD2 count(eval(FIELD3)) AS FIELD3
| append [ search index=_internal latest=-1h@h earliest=@h | head 1 | eval FIELD1=0, FIELD2=0, FIELD3=0 | fields _time FIELD1 FIELD2 FIELD3 ]
| stats sum(FIELD1) AS FIELD1 sum(FIELD2) AS FIELD2 sum(FIELD3) AS FIELD3 BY _time
| where FIELD1=0 OR FIELD2=0 OR FIELD3=0

The problem is that the table shows zero even when data is present in a field. For example:

FIELD1    FIELD2    FIELD3
0         0         0

But in reality, FIELD3 has values:

FIELD1    FIELD2    FIELD3
0         0         59

It should still trigger the alert in that case, because FIELD1 & FIELD2 are zero. @gcusello
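A likely culprit: count(eval(FIELD1)) counts events where a field literally named FIELD1 is non-null, and no such field exists in the tstats output, so every column comes back 0. A hedged rewrite of the same idea, splitting on TMPFIELD instead (note the earliest/latest bounds are also put into chronological order here):

```spl
| tstats count WHERE index=your_index TMPFIELD IN ("FIELD1", "FIELD2", "FIELD3") earliest=-1h@h latest=@h BY _time span=1h TMPFIELD
| timechart span=1h sum(count) BY TMPFIELD
| fillnull value=0 FIELD1 FIELD2 FIELD3
| where FIELD1=0 OR FIELD2=0 OR FIELD3=0
```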
Hi, I'm planning a new Splunk architecture and was thinking about placing syslog-ng on the same virtual machine as the heavy forwarder, to read the files locally.

How will a large data volume impact performance or stability? What do I need to consider for memory and disk space if I combine them? When is it advisable to separate out a dedicated syslog-ng server? Will a dedicated syslog-ng server allow more syslog traffic? Would it be beneficial to install a Universal Forwarder on the HF for local file reading? Is that advisable for better data buffering?

Thank you, Jay
Hi All, regarding the Windows Service Monitoring extension by AppDynamics: a few of the service metrics fluctuate every couple of minutes, even though the services are running fine on the server. monitor.xml is attached. Please help us if you have any suggestions for solving this issue. Thanks & Regards, Maktumhusen
Hi, all! Here's my log file:
- the pattern: raw call progress sequence is: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- the length of the value of the raw call progress sequence might differ from line to line

My question is how I can extract the highlighted part as a new field.

2022-02-07 16:27:49,423|tOX-u3JFAq6EmU3FXYy-Td2|DEBUG|com.hsbc.hvf.mi.MIAPI|endCallMI()|MI insertion started...
2022-02-07 16:27:49,423|tOX-u3JFAq6EmU3FXYy-Td2|DEBUG|com.hsbc.hvf.mi.MIAPI|endCallMI()|raw call progress sequence is:31381113209410021947204792292008771577067705W019W021W023W02099529959
raw call progress sequence is:31381116209410122047922920012099215396732101210296887903763575957598W016E194Q098U165W023A024995299563173
raw call progress sequence is:313811112094100231941577
raw call progress sequence is:313811162094100219472047922920012099215396732101210296889296961877197902790876367637W016E191Q064U086W023A70299529956653765386604W016CS00E191Q064U086W023A7029952995665376538
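A hedged rex sketch for logs like the above (the field name call_seq is my choice; max_match=0 captures every occurrence in an event as a multivalue field):

```spl
... | rex max_match=0 "raw call progress sequence is:(?<call_seq>\S+)"
```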