All Topics

Hi, we are looking to monitor DMZ servers in a SaaS controller. How can we monitor them? Is there any documentation, or any parameters we need to add to the agent startup scripts?
Hello all, I'm using a search that baselines user activity (looks back in time). But I've noticed that sometimes the results are incomplete, and this messes with the next search in the pipeline. Does anyone know how to "abort" (and not update) the lookup file if any errors occurred during the search? Thanks so much.
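One workaround people use for this is to gate the outputlookup behind a sanity check on the number of results, so an incomplete run never overwrites the lookup. A rough sketch (the threshold of 100 and the lookup name user_baseline.csv are placeholders for your own values):

```spl
... your baseline search ...
| eventstats count AS result_count
| where result_count > 100
| fields - result_count
| outputlookup override_if_empty=false user_baseline.csv
```

If the search comes back short, the where clause empties the result set, and override_if_empty=false (available in recent Splunk versions) keeps an empty result set from blanking the existing file.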
Hi, I am trying to highlight values in my table, but I am having trouble implementing it because the table cells can be either single-value or multi-value.

If I only needed to highlight single-value cells, I could use the Splunk example "Table Cell Highlighting" from the "simple_xml_examples" Splunk app. This works fine for me when highlighting table cells that only have one value. If I only needed to highlight each value in multi-value cells, I could use the example from the link below, which also works perfectly: https://answers.splunk.com/answers/694420/is-it-possible-to-highlight-a-value-within-a-multi-1.html

My problem is that my cells can be either single-value or multi-value, so I have to write a script that can highlight the cell or value in both cases. For example, say I had the following fields/values: Field_A = Apple, Field_B = Banana, Field_C = Orange, Apple (a multi-value field). If I wanted to highlight all "Apple" values in my table, I would expect the whole cell to be highlighted for a single-value cell (Field_A) and just the matching value to be highlighted within a multi-value cell (Field_C).

I've tried combining code from both JS scripts but have had no luck so far. I've also tried using the two separate JS files on the dashboard, which worked at first, but later I noticed that, in some cases, it was displaying multi-value cells as comma-separated single-value cells. Has anyone implemented this before? Thanks!
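In case it helps, the two approaches can usually be unified by normalizing every cell value to an array before deciding what to highlight. Below is a minimal sketch of just that decision logic; wiring it into a TableView cell renderer is as in the examples linked above, and the behavior of cell.value described in the comments is an assumption to verify against your Splunk version:

```javascript
// Sketch: normalize a cell value to an array, then flag which entries match
// the highlight target. In a Splunk custom table cell renderer, cell.value is
// typically a string for single-value cells and an array for multi-value
// cells, so one code path can serve both. The "Apple" target is from the
// example in the post.
function highlightFlags(cellValue, target) {
  var values = Array.isArray(cellValue) ? cellValue : [cellValue];
  return values.map(function (v) { return v === target; });
}

// Single-value cell: one flag, so highlight the whole cell.
console.log(highlightFlags("Apple", "Apple")); // [ true ]

// Multi-value cell: per-entry flags, so highlight only the matching values.
console.log(highlightFlags(["Orange", "Apple"], "Apple")); // [ false, true ]
```

The renderer can then highlight the whole `$td` when the array has a single true flag, or wrap each matching entry in a highlighted span when there are several.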
I am trying to create a dashboard that shows a red box around a failed or down state, and a green box around a passed or up state. Currently I am getting the dashboard, but the boxes will only show as black. Could someone please show me what I am doing wrong?

<dashboard>
  <label>Website State</label>
  <row>
    <panel>
      <single>
        <search>
          <query>index="amazon-aws" sourcetype="aws:cloudwatchlogs" state_new=* AND site_location=* |eval state_new=if(state_new=="PASSED","up","down") |eval range=if(state_new=="PASSED","low","severe") |stats latest(state_new) AS state_new BY site_location</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="field">range</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051", "0x0877a6", "0xf8be34", "0xf1813f", "0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">small</option>
        <option name="trellis.splitBy">site_location</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>
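A sketch of one possible cause: after the stats, the only field left is the string "up"/"down" (the range field is discarded), and the single viz applies rangeValues/rangeColors only to numeric results, so it falls back to black. Mapping the state to a number instead might work; the 0/100 values below are assumptions to illustrate:

```spl
index="amazon-aws" sourcetype="aws:cloudwatchlogs" state_new=* site_location=*
| eval state_num=if(state_new=="PASSED", 100, 0)
| stats latest(state_num) AS status BY site_location
```

Then set rangeValues to [50] and rangeColors to ["0xdc4e41", "0x53a051"], so anything at or below 50 shows red and anything above shows green. Note rangeColors needs exactly one more entry than rangeValues.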
We have an Ansible script that rebuilds/reindexes etc. a Splunk indexer if for some reason it implodes on itself. We also have incremental backups of the Splunk databases (for this question, let's say "Data1"). While the script can rebuild the server, what is the best way to add those databases back after a rebuild so we do not lose all the data we have saved? Thanks in advance for any assistance.
We are using the calendar visualization to show events in a dashboard. I have tried to add drilldown behavior using click.value. This works perfectly if I don't use a base search. Once I switch to a base search, click.value works only for the first event. Is there a workaround for this issue?

<row>
  <panel>
    <viz type="calendar_app.calendar">
      <title>Calendar View - click_value: $selected_value$ date: $date$</title>
      <search base="pto_search">
        <query>|eval _time=time | search dataType=ptoData | timechart span=1d count by resourceName</query>
      </search>
      <option name="calendar_app.calendar.calendarView">month</option>
      <option name="calendar_app.calendar.showWeekNumbers">false</option>
      <option name="calendar_app.calendar.showWeekends">false</option>
      <option name="height">550</option>
      <option name="refresh.display">progressbar</option>
      <drilldown>
        <set token="selected_value">$click.value$</set>
        <eval token="date">strftime($click.value$,"%d-%b-%Y")</eval>
      </drilldown>
    </viz>
  </panel>
</row>
Hey Answers, I have an endpoint question if anyone has the knowledge to enlighten me. A client of mine was designing searches for an admin dashboard and crafted two, based on the existing monitoring console, to determine how much of their storage is used.

One tries to determine free disk space on a partition and references /services/server/status/partitions-space. The result is that they're using 7.5 TB of the 8 TB in the drive. The other tries to determine free index space and references /services/data/index-volumes. The result of that one is that they're using 5.2 TB of the 8 TB in the drive.

When I log on to the server and run "df -h", that drive says it's using 5.3 TB of the 8 TB allocated, so it seems like the "index-volumes" one is correct. But he's wondering what the "partitions-space" one is looking at, then, and where that extra data is coming from. I'm stumped, as I thought "partitions-space" was hardware-level storage and "index-volumes" was the Splunk-related storage. Any insight into what partitions-space might be doing differently? Thanks so much, Austin
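If it helps narrow things down, both endpoints can be inspected directly from a search. A rough sketch for the partitions side (capacity and free are reported in MB by this endpoint, as far as I recall, and splunk_server should point at the indexer in question rather than local):

```spl
| rest splunk_server=local /services/server/status/partitions-space
| eval used_TB=round((capacity-free)/1024/1024, 2)
| table mount_point capacity free used_TB
```

Comparing the mount_point rows this returns against the df -h output should at least show whether the discrepancy is in what Splunk reports or in how the dashboard search aggregates it.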
I can't figure out why lsof.sh is running every minute. Here's the "btool inputs list --debug" output for lsof:

/opt/splunkforwarder/etc/apps/DS2-ns2-Splunk_TA_nix-cre/local/inputs.conf [script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/lsof.sh]
/opt/splunkforwarder/etc/system/default/inputs.conf _rcvbuf = 1572864
/opt/splunkforwarder/etc/apps/Splunk_TA_nix/local/inputs.conf disabled = 1
/opt/splunkforwarder/etc/system/local/inputs.conf host = c20sbap01l01
/opt/splunkforwarder/etc/apps/DS2-ns2-Splunk_TA_nix-cre/local/inputs.conf index = cre_linux
/opt/splunkforwarder/etc/apps/Splunk_TA_nix/local/inputs.conf interval = 600
/opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf source = lsof
/opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf sourcetype = lsof

Here's my splunkd.log output:

10-10-2019 16:07:12.898 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/lsof.sh
10-10-2019 16:07:12.898 +0000 INFO ExecProcessor - interval: 60000 ms

I've tried restarting Splunk to no effect. Notice that the interval is set to 600 (600 seconds) in the btool output, but 60000 ms (60 seconds) in the splunkd.log output. I'll try interval = -1 next, and then a single app after that.
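For what it's worth, splunkd.log reporting a 60000 ms interval that matches nothing in the btool output usually suggests the merged stanza btool shows is not the one actually scheduling the script (for example, a second copy of the TA, or a differently spelled stanza path). A sketch of a force-disable in the DS2 app's local file, which per your btool output wins precedence for this stanza (for scripted inputs, interval = -1 means run only once at startup):

```ini
# /opt/splunkforwarder/etc/apps/DS2-ns2-Splunk_TA_nix-cre/local/inputs.conf
[script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/lsof.sh]
disabled = 1
interval = -1
```

After a restart, if the ExecProcessor line still appears, it would be worth grepping all inputs.conf files on the forwarder for "lsof" to find the stanza that is really firing.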
Hello, I am currently trying to relate "front" logs to "back" logs based on their sessionIds and their timestamps, in order to understand the errors I am getting (putting "front results" and "back results" face to face). The logic flows as follows:

I am looking for service A logs that returned a 400 HTTP code ("front" logs, so I need to be within my first index; let's call it front_index). For each log (one log = one error that occurred), I want to extract its timestamp and its sessionId. For each row I get, I want to be able to look for "back" logs (which means switching to my second index; let's call it back_index) based on the timestamp and the sessionId. Each "front" log can have several "back" logs. Finally, I want to be able to print some details such as the timestamp, the sessionId, a detailed errorCode if present, a count if relevant, etc., but that's not the point.

If I do it manually, here are the two searches I run:

search 1: index=front_index sourcetype=access_combined "/url/of/my/service" http_response_code=400
results 1: a list of logs from which I can manually extract the sessionId and the timestamp of each log I want to analyse

search 2: index="back_index" ** **
results 2: different kinds of logs that I manually read in order to extract the information I am looking for

This works well, but on large amounts of data it's just not the way it should be done. So here is what I tried in order to go faster:

search 3: index=front_index sourcetype=access_combined "/url/of/my/service" http_response_code=400 | table hour, minute, sessionId | map search="search index=back_index $hour$:$minute$ $sessionId$ | table _timestamp, session, errorCode"

expectation: I expect the first part to extract the hour, the minute, and the sessionId for each log found in front_index, and that seems to be fine. But then I want the second search to iterate over each row of the first one and look for all the logs it can find in back_index related to the timestamp (the minute is precise enough, as my log timestamps don't always match perfectly) and the sessionId.

My issue seems to be that I can't change the index I am working in. All the data I retrieve is from front_index, even though I know the data I am looking for is there. A first step would be to get data from both indexes in the final list of events (or at least from back_index, as that is where I will get the details I want), and I can't figure out why I can't do that. I tried using wildcards to search both indexes, as their names are partially the same, but it does not seem to work. I looked through the various topics related to the map command, but I did not find anything that could help me (or I missed it), or worse, I totally misunderstood something about the command. The map command seems to be the right way to do what I am trying to do, but if there is a better/simpler way, I am also interested, of course. Thanks for your help, b
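For reference, two common map gotchas: the substituted $field$ values are inserted verbatim, so sessionIds and timestamps usually need explicit quoting inside the inner search, and map stops after maxsearches rows (10 by default), which can make it look as though only some rows were processed. A sketch along those lines (field names are the ones from the post; adjust to your own extractions):

```spl
index=front_index sourcetype=access_combined "/url/of/my/service" http_response_code=400
| eval stamp=strftime(_time, "%d/%m/%Y %H:%M")
| table stamp, sessionId
| map maxsearches=100 search="search index=back_index sessionId=\"$sessionId$\" | table _time, sessionId, errorCode"
```

If even this returns only front_index events, it would be worth checking the job inspector for the spawned subsearches, since map runs one search per input row and errors there are easy to miss.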
My query:

index=main source=secure.log sourcetype=*
| stats earliest(_time) as start, latest(_time) as stop
| eval start=strftime(start, "%m/%d/%y")
| eval stop=strftime(stop, "%m/%d/%y")
| eval days = round((start-stop)/86400)

Please refer to my result below:

start stop
11/16/18 11/23/18

I can see the start and stop dates, but I want the difference between start and stop so I can get the number of days between them. In the result above I want a days column, and the difference is 7 days, but the days column is not appearing. Please suggest.
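A sketch of a likely fix: the days calculation runs after start and stop have been overwritten with formatted strings, so the subtraction fails silently and the column never appears. Computing days first, from the still-numeric epoch values (and as stop-start so it comes out positive), should produce the column:

```spl
index=main source=secure.log sourcetype=*
| stats earliest(_time) as start, latest(_time) as stop
| eval days=round((stop-start)/86400)
| eval start=strftime(start, "%m/%d/%y")
| eval stop=strftime(stop, "%m/%d/%y")
| table start, stop, days
```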
Dear Sirs, I am using lookup to enrich my event data on the fly, and it seems to work fine. However, every invocation of the lookup produces a warning in splunkd.log as follows:

06-04-2020 18:39:27.136 +0300 WARN CsvDataProvider - Unable to find filename property for lookup=splunk-installation-info.csv will attempt to use implicit filename.

The search producing the above is:

| rest splunk_server=lic /services/licenser/slaves | lookup splunk-installation-info.csv splunk_uuid AS title OUTPUT BU

It works, but I am puzzled by the warning. I would like to get rid of it, so how do I tell Splunk the explicit filename? Best regards, Petri
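That warning generally means the lookup has no stanza in transforms.conf, so Splunk falls back to treating the name as an implicit filename. Defining the lookup explicitly should silence it; a sketch (the stanza name splunk_installation_info is an invented example, the filename is the one from the post):

```ini
# transforms.conf (in an app you control, with appropriate permissions)
[splunk_installation_info]
filename = splunk-installation-info.csv
```

Then reference the stanza name instead of the file in the search: | lookup splunk_installation_info splunk_uuid AS title OUTPUT BU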
We're testing the Getwatchlist add-on and it's working very well: https://splunkbase.splunk.com/app/635/ Is the Getwatchlist add-on still supported? Is there perhaps a better alternative?
I was surprised to find that a user with read-only permissions can delete a report. Surely my Splunk setup is incorrect?

I have an app representing a collection of related reports, alerts, dashboards, etc. Authorized users with no special permissions can create and edit their reports in this app (happy days). A separate user that we call "summariser" has permissions for all apps and is used to create and run summary-index-populating activities. We do this separately so that we can give the summariser special resource allowances.

Up until now, these reports were private, which is an issue, as the ordinary users would like to see what is in the SI-populating searches so they can suggest changes, etc. So I changed the permissions to make the SI-populating reports shared in the app and read-only for the app user's role. This does seem to work: they become readable and runnable, yet not saveable. This is exactly what I want, but what surprised me is that the read-only user can DELETE the report. Surely delete should be considered a WRITE operation and not be available, or perhaps some other interaction is allowing this. Please help me fix this.

Note: this is on Splunk Enterprise 8.0.3, having just upgraded from 7.2.4 three days ago... perhaps it is a bug?
I am trying to index a CSV file from a UF, which contains some historical data. Below is a sample of the events. Somehow the events are not getting indexed based on the timestamp from the CSV file. Instead, they are all getting indexed with the current time and not the timestamp from the Time field in the CSV. How can I fix this? I want to index the events based on the Time field, which runs from January to March 2020. Please help to resolve this. I have also attached a screenshot for reference.

Time,Mbps_IN,Mbps_OUT
01/01/2020 0:00,17222030,874306
02/01/2020 0:00,19368200,1504505
03/01/2020 0:00,15194740,150084
04/01/2020 0:00,4768362,1790559
05/01/2020 0:00,57691290,6339732
06/01/2020 0:00,44419200,2114772
07/01/2020 0:00,16432560,1144577
08/01/2020 0:00,9053104,23321280
09/01/2020 0:00,16265580,12490060
10/01/2020 0:00,2274004,4886436
11/01/2020 0:00,28840920,1388473
12/01/2020 0:00,6569902,6743890
13/01/2020 0:00,9766315,31771390
14/01/2020 0:00,8418418,2619432
15/01/2020 0:00,8751632,4382776
16/01/2020 0:00,22305280,8519139
17/01/2020 0:00,2989921,157784
18/01/2020 0:00,5307088,225203
19/01/2020 0:00,21432030,22773270
20/01/2020 0:00,29338980,2971322
21/01/2020 0:00,9230931,2120051
22/01/2020 0:00,7299774,10691780
23/01/2020 0:00,50019440,6489089
24/01/2020 0:00,3431143,241807
25/01/2020 0:00,5989488,830827
26/01/2020 0:00,77886710,7772389
27/01/2020 0:00,6841259,23842100
28/01/2020 0:00,79912540,26599700
29/01/2020 0:00,50530910,5565867
30/01/2020 0:00,21047160,6741192
31/01/2020 0:00,10868270,784867
01/02/2020 0:00,7047898,1671952
02/02/2020 0:00,67265450,8155953
03/02/2020 0:00,36689240,5077973

My inputs.conf:

[monitor:///home/splunk/bw_history/exec_hist.csv]
sourcetype = exec
index = testindex
crcSalt = <SOURCE>

My props.conf:

[exec]
DATETIME_CONFIG = NONE
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
disabled = false
pulldown_type = true
TIMESTAMP_FIELDS = Time
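A sketch of a likely fix: DATETIME_CONFIG = NONE disables timestamp extraction entirely, which would explain every event landing at the current time. Dropping that line and telling Splunk the format of the Time field might do it. The dates look day-first (01/01 through 31/01, then 01/02), so the TIME_FORMAT below assumes %d/%m/%Y; that is an assumption to check against the source data. Note that with INDEXED_EXTRACTIONS these props must live on the forwarder, and already-indexed events keep their existing timestamps:

```ini
[exec]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Time
TIME_FORMAT = %d/%m/%Y %H:%M
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
disabled = false
pulldown_type = true
```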
I have a column called "message" which has duplicate records in it. I want to create a new column named "serial" beside it that holds a serial number for each message. For example: if the message column in the first and second rows is the same, then the new "serial" column should have 1 and 1 in it. If they are not identical, it should have serial numbers 1 and 2, and the serial numbering should continue for the other records based on the uniqueness of the message column.

Example: since my first two records have the same message value, "arran", I put the value 1 wherever it appears in the table. The message "flex" appears in a different row, but I use serial number 2 wherever it appears.
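One way to sketch this in SPL is to number each row, find the first row for each message, and then run a distinct count over the events ordered by first appearance (this assumes the events are already in the order you want the numbering to follow):

```spl
... your base search ...
| streamstats count AS row
| eventstats min(row) AS first_seen BY message
| sort 0 first_seen, row
| streamstats dc(message) AS serial
| sort 0 row
| fields - row, first_seen
```

With rows arran, arran, flex, arran, flex this would assign serial 1, 1, 2, 1, 2, since dc(message) only increases when a previously unseen message appears.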
I am trying to send Meraki alerts to a Splunk HEC endpoint. Please refer to this URL to understand how Meraki sends alerts to receiving services: https://developer.cisco.com/meraki/webhooks/#!introduction/overview

I need to specify the Splunk endpoint and the shared secret on the Meraki webhook alert page, as expected by Meraki. Here are the details:

Webhook URL: Splunk public endpoint DNS (backend will be heavy forwarder:8088)/services/collector/raw
Shared secret: HEC token on that heavy forwarder

Now, when I hit the test option, the Meraki alerts do not flow into Splunk, and on detailed log analysis we get the below error in our splunkd.log:

06-03-2020 17:12:23.556 +0200 ERROR HttpInputDataHandler - Failed processing http input, token name=n/a, channel=n/a, source_IP=****, reply=2, events_processed=0, http_input_body_size=878

It looks like Meraki is not able to send the shared secret with the Splunk token embedded, and hence it is failing. Any suggestion on fixing this would be of great help.
Hi, I'm using the Splunk App for VMware version 3.4.5 and facing an issue with p_average_cpu_coreUtilization_percent metric in virtual machine detail dashboard. Other metrics are populating, but there are no graphs for p_average_cpu_coreUtilization_percent metric. Best, Sebastian
We categorize log events using event types and assign them, using tags, to people who address the issues. Our events are generally Java exception stack traces. Our event types are basically a search on two fields (source and message). What we do now is look for events that don't match any existing event type and create a new event type for them. What we would like to do is automate the event type creation process. We have a volume of event types we have already created as learning material. Is there any type of AI integration for this purpose in Splunk?
I would like to take the following search, which generates the hashes and outputs the lookup:

index=windows source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" Image=*
| fields Hashes
| eval hash=split(Hashes,",")
| mvexpand hash
| dedup hash
| rex field=hash "(?<type>[^=]+)"
| rex field=hash "=(?<hash>[^=]+)"
| table hash
| outputlookup append=true hashes.csv

The output of hashes.csv looks like this:

hash
29B7D02A3B5F670B5AF2DAF008810863
96BEC668680152DF51EC1DE1D5362C64C2ABA1EDA86F9121F517646F5DEC2B72
D7AB69FAD18D4A643D84A271DFC0DBDF
FF79D3C4A0B7EB191783C323AB8363EBD1FD10BE58D8BCC96B07067743CA81D5
601BDDF7691C5AF626A5719F1D7E35F1
4ED2A27860FA154415F65452FF1F94BD6AF762982E2F3470030C504DC3C8A354
9D59442313565C2E0860B88BF32B2277

How do I now take hashes.csv and constantly add new unique hashes to it?
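A common pattern for this is to read the existing lookup back into the results, dedup the union, and overwrite, so the file only ever holds unique hashes. A sketch based on the search above, which could be scheduled as a saved search to keep the lookup growing:

```spl
index=windows source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" Image=*
| fields Hashes
| eval hash=split(Hashes,",")
| mvexpand hash
| rex field=hash "=(?<hash>[^=]+)"
| table hash
| inputlookup append=true hashes.csv
| dedup hash
| outputlookup hashes.csv
```

Note that outputlookup here is deliberately without append=true: the deduplicated union of old and new hashes replaces the file on each run, so duplicates never accumulate.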
Hi Splunk colleagues, I'm having a problem with a multiselect in my dashboards. Here's the code of the multiselect:

<input type="multiselect" token="bap" searchWhenChanged="false">
  <label>BAP</label>
  <fieldForLabel>BAP</fieldForLabel>
  <fieldForValue>BAP</fieldForValue>
  <search base="base">
    <query>| search BAP IN("$form.bap$") | dedup BAP | table BAP</query>
  </search>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
  <choice value="*">Todos</choice>
  <prefix>(</prefix>
  <suffix>)</suffix>
</input>

The thing is that if I pass information through this token (form.bap), the value's prefix and suffix are not applied, and my searches return no results. This is how I look for the token's information in my searches:

| search BAP IN("$form.bap$")

And this is how it appears (in this case, the values that I'm selecting are "BI" and "Core"):

| search BAP IN ("BI,Core")

As you can see, no quotes are added between the two values, so no results are found. I tried to change the way I look for the token's information (just with | search $form.bap$, adding the "BAP IN" part to the prefix), but it's not working either. If you need more information, or if the explanation is not clear enough, let me know! Thanks in advance,
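For what it's worth, prefix, suffix, valuePrefix, valueSuffix and delimiter are only applied when the token is referenced as $bap$; $form.bap$ always carries the raw selected values. So referencing the decorated token should render the quoting configured on the input:

```spl
| search BAP IN $bap$
```

With the prefix/suffix and valuePrefix/valueSuffix settings above, selecting BI and Core should expand this to | search BAP IN ("BI","Core").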