All Topics

I'm creating a Splunk dashboard, and part of what I would like to do is create a dropdown that is populated using a search query. I've confirmed that my search query returns results for the time frame I want, and I've confirmed that some of the results contain the field that I want to use for the label and value. But for some reason my dropdown is disabled and has no data in it.

Am I correct to assume that if the search query is returning results, and at least some of those results contain the field that I want to use in my dropdown, then my dropdown should populate and be usable? Can anyone give me any suggestions as to why my dropdown isn't working?

Note: I deliberately left out my search string and the field I'm using because I assumed my description above would be enough. Please let me know if it would be helpful to include these values.
I am comfortable with the rex command when straightforward text strings are involved. I've got something that is decidedly NOT a straightforward text string. It is a substring in a larger log entry (not shown) and looks like this:

RESULTVECTOR="{2106177} EMAAC02:0(16)/EMACC65:0(68)/BPOSTK01:0(476[11+436+11])/BPOSCC01:0(2072)/BPOSTK01:0(629[15+590+9])/BPOSCC02:0(867)/EMACC28:0(42)/BPOSRT01:0(101)/EMACC65:0(129)/BPOSRT10:0(2063152[15+2063087+31])/EMACC65:0(30)/EMAAC10:0(37884[13+37829+25])/EMACC51:0(23)

The first part identifies the complex substring (RESULTVECTOR) and the overall response time for a transaction. The rest is a set of sorta-name-value-pairs (delimited by "/") of the form:

<sub-process name>:<sub-process response code>(<sub-process response time>)[<optional set of sub-sub-process response times of arbitrary length, delimited by "+">]

I want to recursively process this string to get, at a minimum, the total response time and a set of details for each sub-process (I am willing to ignore the sub-sub-process data for now). I can't get past the first sub-process. My attempt at rex so far is:

rex field=_raw max_match=100 " RESULTVECTOR=\"{(?<TOTAL_RESP>.*)} (?<A_PROC>\w+):(?<A_RC>\d+)\((?<A_RESP>\d+).*"

Is it even possible to capture the data I need using rex?
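Since every sub-process entry follows one repeating shape, a single pattern can pull them all out at once. Below is a minimal Python sketch (the sample string is abbreviated from the one above) for validating the regex idea before porting it; in rex, the analogous move would be max_match=0 so each named group becomes a multivalue field, though that would need testing.

```python
import re

# Abbreviated sample modeled on the log fragment in the question.
raw = (
    'RESULTVECTOR="{2106177} EMAAC02:0(16)/EMACC65:0(68)/'
    'BPOSTK01:0(476[11+436+11])/EMACC51:0(23)'
)

# The overall response time sits inside the braces.
total = re.search(r'RESULTVECTOR="\{(\d+)\}', raw).group(1)

# Each sub-process entry: name:rc(resp[optional "+"-joined sub-sub times]).
# The bracketed part is matched but not captured, per "ignore for now".
entries = re.findall(r"(\w+):(\d+)\((\d+)(?:\[[\d+]+\])?\)", raw)
```

Each tuple in `entries` is (name, response code, response time), one per "/"-delimited segment, regardless of whether the optional bracketed part is present.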
Hello Splunkers. I have a stream of logs going to Splunk that reports daily errors. The log is as follows:

Exceptions Details
App...............: WebApp
Original Message..: The provided anti-forgery token was meant for user "1234" but the current user is "".
Server............: WebAppServer
Service API URL...: https://xpto.systemname.com/WebAppApi/SelfService/FI.API.SelfService

I have these kinds of exceptions going on through the day and night, and my main goal is to compile the type of exception, which URL it happened on, where (server name), and how many times it happened. So what I need is to extract the field after the colon. I've tried...

index="MyIndex" | extract kvdelim=":", auto=f

... as suggested in this cheat sheet, but I couldn't get it to work. Any help/suggestions? Thank you in advance.
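The dot-padded labels above can be captured with one multiline pattern. This is a minimal Python sketch (sample abbreviated from the question, "Original Message" line omitted for brevity), not a tested Splunk extraction, but the same regex idea should translate to a rex or a props.conf field extraction:

```python
import re

# Hypothetical sample shaped like the exception block in the question.
event = (
    "Exceptions Details\n"
    "App...............: WebApp\n"
    "Server............: WebAppServer\n"
    "Service API URL...: https://xpto.systemname.com/WebAppApi/SelfService/FI.API.SelfService"
)

# Label = text before the dot padding; value = everything after ": ".
# Lines without a ": " separator (like the header) simply don't match.
fields = dict(re.findall(r"^(\w[\w ]*?)\.*: (.+)$", event, re.MULTILINE))
```

Once the label/value pairs are extracted this way, counting per URL and server maps naturally onto a stats count by in SPL.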
Hi All,
I've run a search and got these errors.
I've checked the lookup and it's empty. Is there a way to recover it?
Thanks,
Hen
Hello, I've noticed post-upgrade to Splunk Enterprise 8.0.5 that NLP Text Analytics searches freeze when encountering accented characters, as well as some additional characters such as: à “ ” – (long dash). I am certain there are more, but I just want to know how to make them compatible with the NLP Text Analytics searches. I did not have this problem with Splunk 7.3.2 running Python 2.7.x.

I am using lookups to put my data in, but the same happens when the data is coming from an index. I tried creating and recreating the lookup with various methods to ensure that it's UTF-8 encoded, but I could not resolve the issue. If I put one of the characters mentioned above into the pride_prejudice sample CSV file, it breaks that as well (to try it, use field "sentence" and search "| inputlookup pride_prejudice.csv | head 1" on the Counts dashboard).

I have the following components installed:
nlp-text-analytics - 1.1.0
Splunk_SA_Scientific_Python_linux_x86_64 - 2.0.2
Splunk_ML_Toolkit - 5.2.0

Does anybody know how to solve this? Thanks! Andrew
What will be the query to get hosts reporting or not reporting, in a table format for a dashboard, from a lookup CSV containing hostnames and IPs (within a time range of 30 days)? I need the dashboard to show results like the below:

host    ip      UF/syslog   sourcetype   last event time   status
abcd    1234    UF          _internal    ****              logs are coming
dddd    4444    syslog      ffffff       *****             logs are coming
jjjjj   7676                                               logs are not coming
Is it so that you have to have numerical values for all the data in a bubble chart? I've got a table with 4 columns, but only one of them contains numbers. I want the Y-axis to be the product_name, the X-axis to be the Country, and the size of the bubble to be the sum(product_price) for the product.

Looking around, there seems to have been an issue before: https://community.splunk.com/t5/Splunk-Search/Bubble-Charts-input-data-structure-lacking-documentation/m-p/94620#M24419

<option name="charting.axisY">category</option>

But that does nothing; I still get the axes with numbers instead of the text, and all the bubbles piled up in the corner.
Splunk is very powerful, but I wish the search language were more generic, something like SQL. I have 3 buckets, for error, warning, and info, for each source type. I need help from the experts:

1) To add a condition to the error bucket like this:

level="ERROR" or log contains any of these ("Failed","Exception","Fatal")

2) Also, in the dashboard line chart, if I click on the error line, it should actually take me to those error logs. Is it possible?

<dashboard>
  <label>application Name</label>
  <description>Spark application logs</description>
  <row>
    <panel>
      <title>logs</title>
      <chart>
        <title>Streaming Error Count</title>
        <search>
          <query>index=myindex sourcetype=mysourceType1 | timechart count as total_logs count(eval(level="INFO")) as total_info count(eval(level="WARN")) as total_warn count(eval(level="ERROR")) as total_error span=1h</query>
          <earliest>-7d@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
      </chart>
    </panel>
  </row>
</dashboard>
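The condition in question 1) is plain boolean logic; a minimal Python sketch of the intended rule (keyword list taken from the question, events hypothetical) may help pin down the semantics before expressing it in SPL:

```python
# An event counts as an "error" if its level is ERROR, or its message
# contains any of the keywords listed in the question.
ERROR_KEYWORDS = ("Failed", "Exception", "Fatal")

def is_error(level, message):
    return level == "ERROR" or any(k in message for k in ERROR_KEYWORDS)

# Example: a non-ERROR event that still matches via a keyword.
caught = is_error("INFO", "Task Failed: retrying")
```

In SPL, the same rule might be expressed (untested) inside the timechart as count(eval(level="ERROR" OR match(_raw, "Failed|Exception|Fatal"))).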
I have many lookup tables that I am working with, and I am using the REST API to dynamically populate the lookup tables in a dashboard dropdown. The issue I am running into is that I am trying to verify whether data already exists in one of the lookup tables. I can use inputlookup to search the lookup files, but this is restricted to the subsearch limit of 10500, and many of the tables are much larger than this. So I have two questions...

1 - How can I specify a string and use the lookup search? I have tried variations of the following, which haven't worked:

| eval search_term = item1
| lookup table1.csv item1 as column1
| search decription

2 - How can I use the following search to dynamically search all lookup tables, and not use inputlookup, to avoid the subsearch limit?

| REST /services/data/lookup-table-files splunk_server=*
| table title
| search title=*
| map search="|inputlookup $title$"
| search Column1=$search_item$
| table Column1, Column2, Column3
Hi folks, I am new to Splunk. Can you please tell me: what will be the query to get hosts reporting or not reporting, in a table format for a dashboard, from a lookup CSV containing hostnames?
If I execute...

| stats avg(mem_free_percent) as mfp by Region
| fieldformat mfp=round(mfp, 1)."%"

It will display values like 20.5%. However, it breaks my radial dials, I am assuming because the value gets converted to a string. "fieldformat mfp=round(mfp, 1)" by itself still works in radial dials; however, 20.5% gets displayed as "20.5". How do I add the % sign and still maintain the numeric value? Thanks -Mike
I have an index with events containing a field foo that can carry multiple numeric values: 1, 2, 3. I'm looking to count all events where foo is either 1 or 2. I have tried a couple of options with eval and stats count but I'm not getting there:

| stats count (eval(foo=1 OR foo=2)) as Foo_combined
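For reference, the intended count treats each event once even when foo is multivalue; a minimal Python sketch of that semantics (events are hypothetical):

```python
# Hypothetical events; foo is multivalue, as described in the question.
events = [
    {"foo": [1, 2, 3]},   # matches (contains 1 and 2, counted once)
    {"foo": [3]},         # no match
    {"foo": [2]},         # matches
    {"foo": []},          # no match
]

# Count each event once if any of its foo values is 1 or 2.
foo_combined = sum(
    1 for e in events if any(v in (1, 2) for v in e["foo"])
)
```

As an aside, SPL's count(eval(...)) form normally has no space before the parenthesis; whether that is the issue in the attempt above would need testing.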
I want a colour-coded column chart of the average amount of time that an incident has been open, by its severity. So far, I think I have achieved most of this with the following search query:

index=*
| where match(status, "resolved")
| stats earliest(creation_time) as creation_time, earliest(modification_time) as resolve_time, latest(severity) as severity by incident_id
| eval creation_time_epoch = strptime(creation_time, "%Y/%m/%d %H:%M:%S")
| eval resolve_time_epoch = strptime(resolve_time, "%Y/%m/%d %H:%M:%S")
| eval timedifference_hours = ((resolve_time_epoch - creation_time_epoch) / 60) / 60
| stats avg(timedifference_hours) as timedifference_hours_average by severity
| eval timedifference_hours_average = round(timedifference_hours_average, 2)
| table severity, timedifference_hours_average

The problem is that, probably due to a flaw in my search query, the legend is not per severity, and, seemingly as a result, the dashboard XML option

<option name="charting.fieldColors">{"low": 0x3391FF, "medium": 0xF8BE34, "high": 0xDF4D58}</option>

is not applying. Can anyone point me in the right direction? Thanks.
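The time arithmetic in the query (epoch difference divided down to hours, then rounded) can be sanity-checked outside Splunk; here is a minimal Python sketch with hypothetical incidents. Separately, note that charting.fieldColors keys on series (column) names; with stats ... by severity the severities end up as x-axis values rather than series, which may be why the colours are not applying.

```python
from datetime import datetime

# Hypothetical incidents shaped like the fields in the query.
incidents = [
    {"severity": "high", "creation_time": "2020/11/01 00:00:00",
     "resolve_time": "2020/11/01 06:00:00"},
    {"severity": "high", "creation_time": "2020/11/02 00:00:00",
     "resolve_time": "2020/11/02 12:00:00"},
]

FMT = "%Y/%m/%d %H:%M:%S"

def open_hours(inc):
    # Same arithmetic as the query: (resolve epoch - create epoch) / 3600.
    delta = (datetime.strptime(inc["resolve_time"], FMT)
             - datetime.strptime(inc["creation_time"], FMT))
    return delta.total_seconds() / 3600

avg_high = round(sum(open_hours(i) for i in incidents) / len(incidents), 2)
```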
I've been poking around Splunk Answers for a while today and can't quite match the scenario I've got. I have 100 hosts in a lookup, and the Splunk index mostly reports 100 hosts, but sometimes a few servers miss reporting. I want a table with the date and "ServersNotReporting".

| inputlookup HostDetails.csv
| table Host country datacenter
| search NOT [search index=_internal sourcetype="test.log" | stats dc(Host) AS host span=1d ]
| eval Time = strftime(_time, "%Y-%d-%m")
| fields - _time
| table Time ServersNotReporting

Probably my approach is wrong, but I don't know how to do this. Please help. Thanks in advance.
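The underlying operation is a set difference between the lookup's host list and the hosts actually seen in the index; a minimal Python sketch with hypothetical host names:

```python
# Hypothetical host lists mirroring the lookup-vs-index comparison.
lookup_hosts = {"hostA", "hostB", "hostC"}      # from HostDetails.csv
reporting_hosts = {"hostA", "hostC"}            # hosts seen in the index

# Hosts in the lookup that did not report.
servers_not_reporting = sorted(lookup_hosts - reporting_hosts)
```

One thing worth checking in the SPL above: the subsearch returns dc(Host) (a count) rather than a list of Host values, so the NOT has nothing host-shaped to compare against; returning the Host field itself from the subsearch is likely needed for the set difference to work.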
Hi, do we have any documentation or artifacts for AppD instrumentation in DataPower & API-C?
Hi all, we are monitoring some log files in a Windows directory. We'd like to keep only events containing the word FAILURE and discard the rest, so we set a filter on our two heavy forwarders, but it doesn't work: we ingest all data and nothing is discarded. Here are our configuration files:

props.conf

[bpm_metastorm]
DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
disabled=false
TRANSFORMS-set=bpm_null,bpm_parsing

transforms.conf

[bpm_null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[bpm_parsing]
REGEX = FAILURE
DEST_KEY = queue
FORMAT = indexQueue

There aren't other stanzas with the same names as above, so it shouldn't be a name-conflict problem; and we set another similar filter on another sourcetype, which worked, while this new one does not. Do you have any suggestions? Thanks in advance.
Hi, we have Splunk 7.3.4, and the monitoring is running on a Heavy Forwarder.

I would like to extract _time from the file name, for example:

source="\\ILRNACYMSRV03\WebGWAssessResultsForRPA\Bot Status Reports\11-11-2020 07.00.17\CYMULATE_URL_11112020T0300136179Z_Status.csv"

I have defined a new sourcetype as follows.

props.conf

[csv_timestampeval]
BREAK_ONLY_BEFORE_DATE =
INDEXED_EXTRACTIONS = csv
INGEST_EVAL = _time==strptime(replace(replace(source,".*(?=\\\\\\)\\",""),"[\d]{4}Z_Status.csv",""),"CYMULATE_URL_%d%m%YT%H%M%S")
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 384
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRUNCATE = 0
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = 1

inputs.conf

[monitor://\\ILRNACYMSRV03\WebGWAssessResultsForRPA\Bot Status Reports\11-11-2020 07.00.17\*.csv]
disabled = 0
index = test
sourcetype = csv_timestampeval
crcSalt = <SOURCE>
initCrcLength = 1024

The file is not indexed. Could you please assist?
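Outside Splunk, the intended filename-to-timestamp conversion can be checked like this (a Python sketch mirroring the two replace() calls; the path is the one from the question). Also worth double-checking against the props.conf spec: INGEST_EVAL assigns with a single '=', so '_time==strptime(...)' may be parsed as a comparison rather than an assignment.

```python
import re
from datetime import datetime

# Source path from the question.
source = (r"\\ILRNACYMSRV03\WebGWAssessResultsForRPA\Bot Status Reports"
          r"\11-11-2020 07.00.17\CYMULATE_URL_11112020T0300136179Z_Status.csv")

# Keep only the file name, then drop the trailing 4 digits + "Z_Status.csv",
# mirroring the two replace() calls in the INGEST_EVAL.
name = source.rsplit("\\", 1)[-1]
stem = re.sub(r"\d{4}Z_Status\.csv$", "", name)

# Parse the remainder with the same format string as the config.
ts = datetime.strptime(stem, "CYMULATE_URL_%d%m%YT%H%M%S")
```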
Hey folks,

I have what I believed would be a simple question, but it's turning out to be more of a challenge than expected. This is on-prem Splunk Enterprise v7.2.4.2.

I have a view, and I've scheduled PDF delivery of that view each morning. It works like a champ. However, the time presented in the resulting PDF is in UTC (which is what the sending Splunk search head runs under). OK, fine, so I added 'dispatchAs = owner' to the savedsearches.conf entry for the PDF delivery, and made sure that the view has 'owner = <my username>' in metadata/local.meta. I've read and re-read the spec for savedsearches.conf, and I believe I'm interpreting it correctly.

Incidentally, when loading the view in the Splunk web UI as the same username from above, I get the correct times (adjusted for the local time zone).

However, the PDF continues to arrive using UTC (we're in the Central time zone and everything is 6 hours off in the PDF). I don't know what I'm missing; this really doesn't seem like it should be that hard. I *think* I'm reading the docs correctly.

I would very much appreciate any hints, pointers, or clue-by-fours. Thank you so much!

Chris
Hi, I signed up for the 7-day Enterprise Security Sandbox trial. According to the web site, there is supposed to be sample data in the instance. However, there is nothing. Even worse, it looks like the instance didn't even deploy properly (see messages below from Splunk). One of the messages says to contact Splunk support to restart the instance. However, I am not (yet) a Splunk customer, so I cannot open a support ticket. How can I get a properly configured sandbox with sample data in it? Thanks!

11/12/2020, 2:07:56 PM - User 'sc_admin' triggered the 'enable' action on app 'sample_app', and the following objects required a restart: indexes
11/12/2020, 1:28:18 PM - Splunk must be restarted for changes to take effect. Contact Splunk Cloud Support to complete the restart.
11/12/2020, 3:25:53 AM - Health Check: Splunk server "si-i-0e1aa6ee38a60a908.prd-p-j2qgt.splunkcloud.com" does not meet the recommended minimum system requirements.
11/12/2020, 3:20:00 AM - The search "Access - Geographically Improbable Access - Summary Gen" is related to the correlation search "Access - Geographically Improbable Access Detected - Rule" but it is not enabled even though the correlation search is; this will cause the correlation to fail
11/11/2020, 3:20:00 AM - (same message as above)
11/10/2020, 3:20:00 AM - (same message as above)
11/9/2020, 3:20:00 AM -