All Topics

I want to compute the change in temperature for each location over a given interval, say 15 minutes or 30 minutes. I figured that streamstats might capture the temperature value at the beginning of such a time interval, using time_window to specify the interval length. However, the following example surprises me. The temperature readings for Pleasonton are collected every 15 minutes, hence the following query:

| makeresults
| eval _raw="time_ Location Temperature
2021-08-23T03:04:05.000-0700 Pleasonton 185
2021-08-23T03:04:20.000-0700 Pleasonton 86
2021-08-23T03:04:35.000-0700 Pleasonton 87
2021-08-23T03:04:50.000-0700 Pleasonton 89"
| multikv forceheader=1
| eval _time=strptime(time_,"%Y-%m-%dT%H:%M:%S.%3N%z")
| fields _time Location Temperature
| sort _time
| streamstats earliest(Temperature) as previous_temp earliest(_time) as previous_time by Location time_window=5m
| convert ctime(previous_time)

I'd expect the following, since within 5 minutes of an event there should be no other event but the current one:

_time                Location    Temperature  _raw                                          previous_temp  previous_time
2021-08-23 03:04:05  Pleasonton  185          2021-08-23T03:04:05.000-0700 Pleasonton 185   185            08/23/2021 03:04:05.000000
2021-08-23 03:04:20  Pleasonton  86           2021-08-23T03:04:20.000-0700 Pleasonton 86    86             08/23/2021 03:04:20.000000
2021-08-23 03:04:35  Pleasonton  87           2021-08-23T03:04:35.000-0700 Pleasonton 87    87             08/23/2021 03:04:35.000000
2021-08-23 03:04:50  Pleasonton  89           2021-08-23T03:04:50.000-0700 Pleasonton 89    89             08/23/2021 03:04:50.000000

but this is what I actually get:

_time                Location    Temperature  _raw                                          previous_temp  previous_time
2021-08-23 03:04:05  Pleasonton  185          2021-08-23T03:04:05.000-0700 Pleasonton 185   185            08/23/2021 03:04:05.000000
2021-08-23 03:04:20  Pleasonton  86           2021-08-23T03:04:20.000-0700 Pleasonton 86    185            08/23/2021 03:04:05.000000
2021-08-23 03:04:35  Pleasonton  87           2021-08-23T03:04:35.000-0700 Pleasonton 87    185            08/23/2021 03:04:05.000000
2021-08-23 03:04:50  Pleasonton  89           2021-08-23T03:04:50.000-0700 Pleasonton 89    185            08/23/2021 03:04:05.000000

Every row takes the earliest event's temperature, which as I understand it is beyond 5 minutes from any subsequent event. How can I query to get the temperature at the beginning of the time period?
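
One possible approach (a sketch only, not verified against this data set): keep streamstats with time_window set to the interval of interest, take the windowed earliest() value as the interval-start temperature, and subtract it from the current reading. The field names interval_start_temp and temp_change below are made up for illustration. Note that the window trails each event, so the first event of a window compares against itself (temp_change = 0).

| streamstats time_window=15m earliest(Temperature) as interval_start_temp by Location
| eval temp_change = Temperature - interval_start_temp
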
Please can anyone help me: is it in kb or in mb? Thanks in advance.
When trying to install this app on 8.2.0 (or 8.2.1), I get the following error:

There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and DA-ITSI-CP-microsoft-exchange

This happens both when uploading the downloaded .spl file and when installing it automatically from the integrated installer.
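
A possible workaround while the package is broken (a sketch; the filename below is assumed, and an .spl is just a gzipped tar): unpack the archive, check what second top-level entry sits beside the app directory, and re-archive only the app directory.

#!/bin/bash
# assumed download name: DA-ITSI-CP-microsoft-exchange.spl
mkdir repack
tar -xzf DA-ITSI-CP-microsoft-exchange.spl -C repack
ls -a repack   # look for the stray entry next to the app folder
tar -czf DA-ITSI-CP-microsoft-exchange-fixed.spl -C repack DA-ITSI-CP-microsoft-exchange
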
Hi, is there a Splunk equivalent to the Linux "numfmt" command? I have a dashboard showing disk usage, captured in bytes, and I want to display the number in a "human readable format": some disks will be MB or GB, some will be TB. I'd prefer not to simply divide by 1024. I'm guessing there's a clever number formatter in Splunk like there is in Linux?

$ numfmt --to=si 122345
123K
$ numfmt --to=si 122345678
123M
$ numfmt --to=si 1223456789012
1.3T

Cheers, Andrew
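
As far as I know there is no single built-in numfmt-style function, so a common workaround is an eval-based formatter. A minimal sketch, assuming the raw value is in a field called bytes and using binary (1024-based) units; unit, divisor and usage_readable are names made up for illustration:

| eval unit=case(bytes>=pow(1024,4),"TB", bytes>=pow(1024,3),"GB", bytes>=pow(1024,2),"MB", bytes>=1024,"KB", true(),"B")
| eval divisor=case(unit=="TB",pow(1024,4), unit=="GB",pow(1024,3), unit=="MB",pow(1024,2), unit=="KB",1024, true(),1)
| eval usage_readable=round(bytes/divisor,1)." ".unit
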
I have a Red Hat 7.4 syslog-ng server with a Splunk heavy forwarder (8.1.2) installed; the server's time zone is EST. The server collects udp/514 logs from multiple networking devices and writes them to text files like:

/syslogs/todays-internetfirewalls.txt
/syslogs/todays-routers.txt
/syslogs/todays-switches.txt

The heavy forwarder has file monitor inputs for the various text files, assigned to the appropriate index with the appropriate sourcetype. Some of the network devices sending udp/514 syslog to this server are in different time zones, but the entries written to the text files are not adjusted for time zone. Example screenshot attached: IP 172.24.63.88 is GMT and 172.24.3.5 is EST. I researched this and tried to create an app called Timezones on the HF with a local/props.conf that just lists:

[host::172.24.63.88]
TZ = GMT

but when the file data is ingested, _time for the IP in GMT is the same as it appears in the log file entry, with no adjustment to bring GMT time to EST. Any help would be appreciated. I have read several links already and followed a few answers:

https://community.splunk.com/t5/Dashboards-Visualizations/Multiple-Timezones-search-worldwide/td-p/91339
https://community.splunk.com/t5/Getting-Data-In/Multiple-time-zones-in-props-conf/m-p/286456#M54667
https://docs.splunk.com/Documentation/Splunk/latest/admin/propsconf

Rich
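
One thing worth checking (a sketch, not a confirmed diagnosis): a [host::...] props stanza matches Splunk's host field, and for a monitored file that field usually defaults to the syslog server itself rather than the originating device IP, so the [host::172.24.63.88] stanza may never match. If a whole file comes from GMT devices, the TZ could instead be keyed on the source path on the heavy forwarder (the parsing tier); if GMT and EST devices share one file, the host field would first have to be set per event (for example with a transform keyed on the device IP) before a host-based TZ can apply.

# props.conf sketch on the heavy forwarder, assuming this file only contains GMT devices
[source::/syslogs/todays-internetfirewalls.txt]
TZ = GMT
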
Hello, does anyone know how to pass parameters to a saved search using splunklib for the Splunk API? I am able to use it to get results from my saved searches, but now I would like to pass a variable value to my saved search. I've seen a few examples of people using the "curl" approach, but I wanted to see if there is a way to do this directly with splunklib for Python. This is the snippet of code where I retrieve my saved search and then run it:

number_of_users = 10
search_name = "Find Most Recent Users"
mysavedsearch = service.saved_searches[search_name]
job = mysavedsearch.dispatch()

So I have a saved search named "Find Most Recent Users", and that search looks like:

index=INDEX host=HOST sourcetype=SOURCETYPE | rex field=_raw "User:(?<user_id>\d+)" | where isnotnull(user_id) | head $number_of_users$

How would I pass the variable "number_of_users" into the above?
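
One possible approach (a sketch, not tested against this exact saved search): as I understand it, the saved-search dispatch endpoint accepts "args.*" parameters that replace $...$ tokens in the search string, and splunklib's dispatch() passes keyword arguments straight through to that endpoint. The connection details below are placeholders.

import splunklib.client as client
import splunklib.results as results

service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

mysavedsearch = service.saved_searches["Find Most Recent Users"]

# "args.number_of_users" should fill the $number_of_users$ token in the saved search
job = mysavedsearch.dispatch(**{"args.number_of_users": 10})

while not job.is_done():
    pass

for result in results.ResultsReader(job.results()):
    print(result)
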
I have a link on one dashboard:

<html>
  <a href="....._dashboard?form.dcCell=$dcCell$&amp;form.rhel=$rhel$&amp;form.field1.earliest=$field1.earliest$&amp;form.field1.latest=$field1.latest$">PRD Dashboard</a>
</html>

The link is filled out correctly. When I click it and go to the target dashboard, the inputs are being set correctly, but the tokens are not being set at all. Target dashboard:

<fieldset submitButton="false" autoRun="true">
  <input type="dropdown" token="rhel" searchWhenChanged="true">
    <label>RHEL Server</label>
    <choice value="rhel6">RHEL 6</choice>
    <choice value="rhel8">RHEL 8</choice>
    <change>
      <condition match="match($rhel$,&quot;rhel6&quot;) AND match($dcCell$, &quot;allE&quot;)">
        <set token="use_jcs">True</set>
        <set token="use_hikari">True</set>
      </condition>
      <condition match="match($rhel$,&quot;rhel8&quot;) AND match($dcCell$, &quot;allE&quot;)">
        <set token="use_hikari">True</set>
        <unset token="use_jcs"></unset>

When using the dropdowns, everything works fine. It only breaks when I put the parameters in the URL. I have even tried using the dropdowns, copying the URL the browser generates, and going directly to that, and it still does not set the tokens. What am I missing?
    { "miners":[ { "address":"7338594461977886954", "addressRS":"S-GJ9C-T2EF-C82A-8EZPD", "pendingBalance":"0 SIGNA", "totalCapacity":80.04373488882364, ... See more...
    { "miners":[ { "address":"7338594461977886954", "addressRS":"S-GJ9C-T2EF-C82A-8EZPD", "pendingBalance":"0 SIGNA", "totalCapacity":80.04373488882364, "totalEffectiveCapacity":64.44006329147014, "commitment":"2467.564 SIGNA", "committedBalance":"159010 SIGNA", "boost":0.4127924541887127, "boostPool":0.6147618784432624, "sharedCapacity":64.44006329147014, "sharePercent":100, "donationPercent":1, "nConf":120, "share":0.4091681714251888, "minimumPayout":"10 SIGNA", "currentRoundBestDeadline":"511", "name":"Pir8Radio", "userAgent":"signum-miner/1.8.0" }, { "address":"632........... REPEATS FOR EACH USER       So my json output looks like the above, but there are anywhere from 1-1000 users.. I'm not quite sure how to break up each user and have each of the stats per user?    Any help would be greatly appreciated guys..  thanks!!
I want to convert the counts from a list (picture 1) to a table (picture 2). How do I do that?

|stats count by Name,severity
|sort - count
| stats list(severity) as severity, list(count) as count, sum(count) as Total by Name
| sort -Total

For each Name, the severity field lists the levels and the count field holds the count for each level inside severity.
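
If picture 2 is one row per Name with one column per severity level plus a total, one possible rewrite (a sketch, based only on how I read the description) is a crosstab instead of list():

| chart count over Name by severity
| addtotals fieldname=Total
| sort - Total
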
I have 4 applications. All of them generate events like RECEIVED, DELIVERED and DISCARDED. In my dashboard, I want to have a panel which shows:
the sum of the event count of RECEIVED from cs-app1, cs-app2 and cs-app3;
the sum of the event count of DELIVERED from cs-app4; and
the sum of all DISCARDED events from all of the apps.

Currently these are displayed as 3 different timechart panels. I would like to combine them into one single timechart, which would reduce the clutter on the dashboard. Is that possible? If so, how should I frame the query so that it is efficient?

Event example:

{ "log_processed" : { "message" : { "app_name" : "cs-app1", "logEvent" : "RECEIVED" } } }

RECEIVED events:

index=dockerlogs
| search log_processed.app_name IN ("cs-app1", "cs-app2","cs-app3")
| spath input=log_processed.message output=logEvent path=logEvent
| search logEvent = "RECEIVED"
| timechart span=1d count(logEvent) by logEvent

DELIVERED events:

index=dockerlogs kubernetes.namespace_name=default
| search log_processed.app_name IN ("cs-app4")
| spath input=log_processed.message output=logEvent path=logEvent
| search logEvent = "DELIVERED"
| timechart span=1d count(logEvent) by logEvent

DISCARDED events:

index=dockerlogs kubernetes.namespace_name=default
| search log_processed.app_name=*
| spath input=log_processed.message output=logEvent path=logEvent
| search logEvent = "DISCARDED"
| timechart span=1d count(logEvent) by logEvent
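
One way to fold these into a single panel (a sketch, assuming all three searches can share the same base search and namespace filter): classify each event into a series with eval/case, then timechart by that series. The series field name is made up for illustration.

index=dockerlogs kubernetes.namespace_name=default
| spath input=log_processed.message output=logEvent path=logEvent
| eval series=case(
    logEvent=="RECEIVED" AND in('log_processed.app_name', "cs-app1", "cs-app2", "cs-app3"), "RECEIVED",
    logEvent=="DELIVERED" AND 'log_processed.app_name'=="cs-app4", "DELIVERED",
    logEvent=="DISCARDED", "DISCARDED")
| where isnotnull(series)
| timechart span=1d count by series
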
Hello, I have a directory structure which I want to split up into separate events. For example:

\MAIN\SUB1\SUB2\SUB3\file.xlsx

should produce:

\MAIN
\MAIN\SUB1
\MAIN\SUB1\SUB2\
\MAIN\SUB1\SUB2\SUB3\

Of course the number of subdirectories can vary, from 1 to many. I know I cannot use a for-loop command, so I am searching for another way to handle this challenge. How should I handle this, and is it possible? Any help is appreciated. Regards, Harry
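
One loop-free approach (a sketch, assuming the full path sits in a field called path): split the path into its parts, generate one row per prefix length with mvrange + mvexpand, and rebuild each prefix with mvjoin. The field names parts, n and prefix are made up for illustration; note that in eval string literals "\\" stands for a single backslash.

| eval parts=split(ltrim(path, "\\"), "\\")
| eval n=mvrange(1, mvcount(parts))
| mvexpand n
| eval prefix="\\" . mvjoin(mvindex(parts, 0, n-1), "\\")
| table path prefix

With \MAIN\SUB1\SUB2\SUB3\file.xlsx this should yield one row each for \MAIN, \MAIN\SUB1, \MAIN\SUB1\SUB2 and \MAIN\SUB1\SUB2\SUB3 (the file name itself is excluded because n stops one short of the part count).
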
Hi, how does Splunk communicate with other systems, e.g. a ticketing tool or a cloud-based system? I have gone through the link below, which is useful. It describes how Splunk communicates with other systems, but how can other systems communicate with Splunk? Is there any interface Splunk provides?

Forward data to third-party systems - Splunk Documentation

Best Regards, Tushar
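
For the inbound direction, the two interfaces usually mentioned are the REST API (for running searches, managing objects, etc.) and the HTTP Event Collector (for pushing events in). A sketch with placeholder host, credentials and token, just to show the shape of the calls:

# REST API on the management port (8089 by default): run a search from another system
curl -k -u admin:changeme "https://splunk.example.com:8089/services/search/jobs" \
     -d search="search index=_internal | head 5"

# HTTP Event Collector (8088 by default): push an event into Splunk
curl -k "https://splunk.example.com:8088/services/collector/event" \
     -H "Authorization: Splunk <hec-token>" \
     -d '{"event": {"message": "hello from another system"}}'
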
Hi, I have a CSV file containing events (meta-data of users visiting URLs) that I import. The challenge I face is getting Splunk to use one of the fields in the data, event_time (shown in the screenshot, last line at the bottom), as the actual event time shown in the default Time column. If I knew what I was doing this would probably be super easy. I keep importing the same file and trying different timestamp methods when defining a new source type during the import. There is probably a simple way to do this using the source type fields on import or in props.conf, even without having to keep re-importing it? I have read the user guide sections Modify Event Processing and Assign Source Types to Data, but hours later... here I am. Thanks, Shane

The field event_time is what I would like to appear in the Time column.
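
One way this is usually done for CSV data (a sketch; the sourcetype name is made up and the TIME_FORMAT must be adjusted to match the actual values in the column): use structured-data extraction and point TIMESTAMP_FIELDS at the column, in props.conf on the instance that reads the file.

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = event_time
# assumption: adjust to the real format of event_time
TIME_FORMAT = %Y-%m-%d %H:%M:%S
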
Hi everyone. I'm using Splunk Enterprise (Trial) to understand how things work. I'm trying to configure a sourcetype for my python/flask application: logs were getting merged incorrectly, with two or more log lines being joined inside a single event, and the sourcetype is not being applied. For example, this is a single event in Splunk:

[2021-06-23 12:05:09,807] {/program.py:make_request:452} DEBUG - https://localhost:443 "POST /create/user HTTP/1.1" 201 None
[2021-06-23 12:05:09,810] {/program.py:make_request:493} INFO - user created with success id=1234

I also have some logs with this format:

[2021-06-24 17:48:37,490] {/program/main.py:authorize:69} INFO - Host: localhost:5000 User-Agent: curl/7.64.0 Accept: */*

I tried creating a new sourcetype under Settings -> Data -> Source Types, but I noticed two weird things.

1 - If I go to Advanced and configure it as I want, it does not save my new regex for LINE_BREAKER. I need it to be ([\n\r]+)\[ but every time I press save and open it again, it's back to the default ([\n\r]+). If I go to "Event Breaks" instead and just type my regex, it saves. What am I doing wrong?

2 - It doesn't apply my new sourcetype to my logs. I checked under Search -> Event lists and my logs were being sourcetyped as output-too_small; after I changed something it became output-2.

Then I started googling around and reading some docs, which say to edit some files on the Splunk server, so I did:

3 - I also tried creating a new sourcetype in $SPLUNK_HOME/etc/system/local/props.conf as follows:

[python_flask]
SHOULD_LINEMERGE = False
LINE_BREAKER = ([\n\r]+)\[
NO_BINARY_CHECK = true
disabled = false

4 - I also changed my $SPLUNK_HOME/etc/system/local/inputs.conf and added:

[monitor:///var/log/program/output.log]
sourcetype = python_flask

I restarted both the server and the universal forwarder with splunk restart, and the only thing that changed is that it started to put sourcetype=output-2 on my events. I'm quite a noob in Splunk management, so sorry if any question is dumb; I have already checked the docs, Google and so on. Thank you so much in advance.
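
A sketch of how this is commonly configured (not a confirmed diagnosis of your setup): line breaking and timestamping from props.conf are applied on the instance that parses the data, which for a universal forwarder setup is the indexer, not the forwarder; the sourcetype assignment, on the other hand, has to be in inputs.conf on the forwarder that monitors the file. Assuming your props.conf stanza ends up on the parsing tier, something like this should break one event per "[timestamp]" line:

[python_flask]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
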
Has anyone used marginal histograms in their Splunk dashboards?  It's similar to piping to addtotals and/or addcoltotals, except instead of a number, you'd display a bar/column.  It's essentially a melding of a statistical table and a bar/column chart, in one object/panel.  (You can do this in Excel for right-hand marginal histograms using conditional formatting to display a bar within the cell.  Screenshot example included below.  NOTE: the bottom marginal histogram was manipulated, because Excel doesn't let you do it that way.) I would like to be able to display a chart like the one in the included screenshot, so that we can quickly (visually) identify the highest-activity application and/or days.  Ideally, it would be nice to be able to put both margins on (one on the right, that sums the data in the row, and one on the bottom, that sums the data in the column).
My search ends with:

... | stats count(Request) as RequestCnt, count(FailedRequest) as FailedRequestCnt
| eval FaildRequestPercentage = RequestCnt / FailedRequestCnt * 100

How would I specify a trigger for FaildRequestPercentage > 10? And how would I include the RequestCnt, FailedRequestCnt, and FaildRequestPercentage values in my alert message?
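
One way to do this (a sketch): in the alert's trigger settings, choose a custom trigger condition such as

search FaildRequestPercentage > 10

and in the alert action's message, reference fields from the first result row with $result.<fieldname>$ tokens, for example:

Failed request percentage is $result.FaildRequestPercentage$% ($result.FailedRequestCnt$ failed out of $result.RequestCnt$ requests)

As a side note, for a failure percentage the eval may also need to be FailedRequestCnt / RequestCnt * 100 rather than the other way around, but that is separate from the trigger question.
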
We are using Splunk DB Connect version 3.4.0 and schedule jobs to run on cron. We are in the Eastern time zone. When a job is scheduled to run at a certain time, it runs an hour later than expected. For example: at 12:39pm EST the job "S_dbconect_test_job" is scheduled to run at 12:41pm EST, but the internal log shows the scheduling logged at 11:39am, not at 12:39pm, and as a result the job runs at 1:39pm EST. Here is the entry from the internal log:

6/24/21 12:39:50.487 PM
2021-06-24 11:39:50.487 -0500 [dw-679 - PUT /api/inputs/S_dbconect_test_job] INFO org.easybatch.extensions.quartz.JobScheduler - Scheduling job org.easybatch.core.job.BatchJob@1956b4b7 with cron expression 0 41 12 ? * *
host = host03
source = /opt/splunk/var/log/splunk/splunk_app_db_connect_server.log
sourcetype = dbx_server

The time zone of the connection used for the job is set to US/Eastern: -04:00. Where might this one-hour difference come from? Any advice is appreciated.
I am using a script that gives me some data in JSON format, and I want to send this data to Splunk. I can store the output of the script in a file, but how can I send it to the HTTP Event Collector? A couple of things I tried that did not work:

#!/bin/bash
FILE="output.json"
file1="cat answer.txt"
curl -k "https://prd-pxxx.splunkcloud.com:8088/services/collector" -H "Authorization: Splunk XXXXX" -d '{"event": "$file1", "sourcetype": "manual"}'

curl -k "https://prd-pxxx.splunkcloud.com:8088/services/collector" -H "Authorization: Splunk XXXXX" -d '{"event": "@output.json", "sourcetype": "manual"}'

curl -k "https://prd-p-w0gjo.splunkcloud.com:8088/services/collector" -H "Authorization: Splunk d70b305e-01ef-490d-a6d8-b875d98e689b" -d '{"sourcetype":"_json", "event": "@output.json", "source": "output.json}

After trying this I understand that it literally sends everything specified in the event section as text. Is there a way I can send the content of the file, or use a variable? Thanks in advance!
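
The problem with the attempts above is that single-quoted strings are sent literally: neither $file1 nor @output.json gets expanded inside them. A minimal sketch of one way around it, assuming jq is installed, output.json holds one JSON object, and using the same placeholder URL and token as in the post:

#!/bin/bash
# wrap the file contents in an HEC "event" envelope, then POST it
payload=$(jq -c '{event: ., sourcetype: "_json", source: "output.json"}' output.json)
curl -k "https://prd-pxxx.splunkcloud.com:8088/services/collector" \
     -H "Authorization: Splunk XXXXX" \
     -d "$payload"

Without jq, command substitution inside double quotes also works for a valid JSON file, e.g. -d "{\"event\": $(cat output.json), \"sourcetype\": \"_json\"}".
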
Hello, hoping someone can help with a field extraction question regarding multi-line text, where I want to capture a specific value before a comma on each line. The text example is below; I am trying to get the Tagname for each line, but the field extraction is only applying to the first line. Testing in Rubular or Regex101 it works fine.

Tag: Tagname,Date,Value
Tag: Tagname1,Date1,Value1
Tag: Tagname2,Date2,Value2
Tag: Tagname3,Date3,Value3
Tag: Tagname4,Date4,Value4
Tag: Tagname5,Date5,Value5

I've tried:

Tag:\s(?<Tag>.+?),
(?ms)Tag:\s (?<Tag>.+?),
(?m)Tag: \s(?<Tag>.+?),

as well as a few others, but all seem to stop after the first capture. Any help would be appreciated, thanks!
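
If this is a search-time extraction, one thing to note (a sketch, not a full diagnosis): extractions stop at the first match unless told otherwise. With rex, max_match=0 captures every occurrence into a multivalue Tag field, for example:

| rex max_match=0 "Tag:\s(?<Tag>[^,]+),"
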
We have two Splunk environments: Splunk Enterprise and Splunk Cloud.  Splunk Cloud is our production system.  Splunk Enterprise is our development system where our private apps are developed, package... See more...
We have two Splunk environments: Splunk Enterprise and Splunk Cloud.  Splunk Cloud is our production system.  Splunk Enterprise is our development system where our private apps are developed, packaged, vetted, and then pushed to the Splunk Cloud.  One of our admins accidentally modified a private app dashboard through the Splunk Cloud Web UI.  Since then, we are unable to update that dashboard when new versions of the private app are loaded into Splunk Cloud.  Splunk also will not allow us to delete the dashboard. How do we overwrite a private app dashboard in Splunk Cloud with a new version of the app after the dashboard has been edited using the Web UI?