All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


See title. Searches still run but it constantly throws that error during every search.
Hi All, From one single value panel to another single value panel (which needs to be a hidden drilldown), can we have arrows, so it will look like a tree structure?
Hi team, I need to apply a different condition in one panel of my dashboard depending on a selected column in another panel. I am trying to assign a token value in my first panel depending on the column I just selected, and then use this token value to apply a specific condition in my second panel.

My first question is: what token value do I have to set in order to identify the clicked column? I mean, which token can I select: $click.name$ or $click.name2$?

My second question is: how can I configure the query in the second panel to take the token value into account and apply the specific condition based on it?

Thanks in advance
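
A minimal Simple XML sketch of the token part, assuming a table in the first panel (index, field names and panel contents are placeholders): in a table drilldown, $click.name2$ holds the name of the clicked column, while $click.name$ holds the name of the leftmost column of the clicked row. Setting a token from $click.name2$ and referencing it in the second panel's search is one way to do this.

    <panel>
      <table>
        <search><query>index=my_index | stats count by host, status</query></search>
        <drilldown>
          <set token="sel_column">$click.name2$</set>
        </drilldown>
      </table>
    </panel>
    <panel>
      <table>
        <search>
          <query>index=my_index
    | eval sel="$sel_column$"
    | where (sel="status" AND status>=400) OR (sel="host" AND like(host,"web%"))</query>
        </search>
      </table>
    </panel>

The where clause is only an illustration of branching on the token; the actual condition per column is up to the use case.
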
Recently deployed this add-on, but it doesn't seem to bring back Traffic or URL logs the way we did when using TA-meraki & syslog. Are these not supported with the API-based mechanism, or is there something I'm missing - like a setting on the Meraki end to include these logs? Thanks, Gord T.
I've set up Kinesis Firehose to push to Splunk HEC, which is ingesting fine; however, I would like to add the logstream field from CloudWatch to Splunk.

The code used in the Lambda can be found here: https://github.com/ptdavies17/CloudwatchFH2HEC

    sourcetype = os.environ['SPLUNK_SOURCETYPE']
    return_message = '{"time": ' + str(log_event['timestamp']) + ',"host": "' + \
        arn + '","source": "' + filterName + ':' + loggrp + '"'
    return_message = return_message + ',"sourcetype":"' + sourcetype + '"'
    return_message = return_message + ',"event": ' + \
        json.dumps(log_event['message']) + '}\n'
    return return_message + '\n'

The code block above works and it returns the formatted message:

    {
      "time": 1234567891011,
      "host": "arn",
      "source": "some_source",
      "sourcetype": "some_source_type",
      "event": "event_data"
    }

When I add the logstream to the code block as such:

    sourcetype = os.environ['SPLUNK_SOURCETYPE']
    return_message = '{"time": ' + str(log_event['timestamp']) + ',"host": "' + \
        arn + '","source": "' + filterName + ':' + loggrp + '"'
    return_message = return_message + ',"logstream":"' + logstrm + '"'
    return_message = return_message + ',"sourcetype":"' + sourcetype + '"'
    return_message = return_message + ',"event": ' + \
        json.dumps(log_event['message']) + '}\n'
    return return_message + '\n'

The changes get processed by the transformation Lambda correctly (it is valid JSON, and the logs in CloudWatch confirm that):

    {
      "time": 1234567891011,
      "host": "some_host",
      "source": "some_source",
      "logstream": "logstream_in_amazon",
      "sourcetype": "some_source_type",
      "event": "some_event_data"
    }

But the Kinesis Delivery Stream to Splunk errors out with:

    The data is not formatted correctly. To see how to properly format data for Raw or Event HEC endpoints, see Splunk Event Data (http://dev.splunk.com/view/event-collector/SP-CAAAE6P#data). HecServerErrorResponseException{serverRespObject=HecErrorResponseValueObject{text=Invalid data format, code=6, invalidEventNumber=0}, httpBodyAndStatus=HttpBodyAndStatus{statusCode=400, body={"text":"Invalid data format","code":6,"invalid-event-number":0}...

Is there something I'm missing? I've tried tons of changes to the Lambda code, but all of them return this error. Maybe I cannot add a field from CW to Splunk this way? I am new to Splunk, so I might not be asking the right question or might be missing something, but any help would be much appreciated.
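
A hedged sketch of the JSON shape the HEC event endpoint accepts, reusing the variable names already in the Lambda (log_event, arn, filterName, loggrp, logstrm, sourcetype): the event endpoint only allows time, host, source, sourcetype, index, event and fields at the top level, so an extra key such as "logstream" is rejected with "Invalid data format"; nesting it under "fields" (or inside "event") should pass validation.

    import json

    def build_return_message(log_event, arn, filterName, loggrp, logstrm, sourcetype):
        # Only HEC-recognised top-level keys; extra metadata goes under "fields".
        payload = {
            "time": log_event['timestamp'],
            "host": arn,
            "source": filterName + ':' + loggrp,
            "sourcetype": sourcetype,
            "fields": {"logstream": logstrm},
            "event": log_event['message'],
        }
        return json.dumps(payload) + '\n'

Building the payload as a dict and serialising it with json.dumps also avoids the hand-assembled string concatenation in the original snippet.
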
In the context of connecting Splunk Cloud and Phantom: does Phantom/Splunk SOAR support mTLS?
I want to create a drilldown that searches over the time range displayed by the timechart command. How should I set up the time range specification in the drilldown settings?
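
A minimal Simple XML sketch, assuming the panel is a chart built from a timechart (the searches are placeholders): the predefined drilldown tokens $earliest$ and $latest$ hold the time bounds of the clicked column, so they can be passed straight into the drilldown search's time range.

    <chart>
      <search><query>index=my_index | timechart span=1h count</query></search>
      <drilldown>
        <link target="_blank">search?q=search%20index%3Dmy_index&amp;earliest=$earliest$&amp;latest=$latest$</link>
      </drilldown>
    </chart>

The same tokens can also be stored with <set token="..."> if the drilldown should drive another panel instead of opening a new search.
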
I would like to search from 600 seconds before to 600 seconds after the time specified in the time picker on the dashboard. Does anyone have a good idea? The time picker is also used in another panel, so the request is to avoid placing multiple time pickers.

This SPL did not pass:

    index=_internal earliest=$field1.earliest$-600 latest=$field1.earliest$+600
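
One hedged way to do this without a second time picker, assuming a Simple XML dashboard where the time input token is field1: run a small global search over the picker's range, let addinfo expose the effective epoch boundaries, and set widened tokens from the <done> handler; the panel search then uses those tokens as its earliest/latest.

    <search>
      <query>| makeresults | addinfo
    | eval wide_earliest=info_min_time-600, wide_latest=info_min_time+600</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
      <done>
        <set token="wide_earliest">$result.wide_earliest$</set>
        <set token="wide_latest">$result.wide_latest$</set>
      </done>
    </search>

    <search>
      <query>index=_internal</query>
      <earliest>$wide_earliest$</earliest>
      <latest>$wide_latest$</latest>
    </search>

This keeps the single time picker: the other panel keeps using $field1.earliest$/$field1.latest$ directly, while this panel uses the widened tokens.
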
Hello, I have created a few indexes, each containing data from only one source with one sourcetype. From a search performance point of view, is it necessary to include the sourcetype in each search if there is only one sourcetype "associated" with a specific index? Is Splunk's internal "engine" slower when I do not specify it?
Hi, In big data, can we replace Hadoop with Splunk? And why? Does Splunk provide all of Hadoop's functionality?
Logs are not coming in from a Linux machine. I am using a Splunk Cloud trial, installed the universal forwarder on a Linux machine, and added a monitor path as well. But no luck.
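
For reference, a minimal monitor stanza and the CLI checks to start with, assuming the forwarder should read /var/log/messages (path, index and sourcetype are placeholders); outputs.conf normally comes from the Splunk Cloud universal forwarder credentials app, and the target index must already exist on the Cloud stack.

    # $SPLUNK_HOME/etc/system/local/inputs.conf (or inside an app)
    [monitor:///var/log/messages]
    index = main
    sourcetype = syslog
    disabled = false

    # on the forwarder, verify what is monitored and where data is forwarded
    $SPLUNK_HOME/bin/splunk list monitor
    $SPLUNK_HOME/bin/splunk list forward-server

If the forward-server shows as inactive, splunkd.log on the forwarder usually contains TcpOutputProc errors pointing at the blocked connection.
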
Hi All, We are observing a high number of parsing issues on sourcetype=symantec:email:cloud:atp. We haven't made any changes to the add-on. Please suggest how to resolve this issue, how to identify exactly which events are affected, and how to fix them.

    Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Jun 29 10:52:21 2022). Context: source=/opt/splunk/etc/apps/TA-symantec_email/bin/symantec_collect_atp.py|host=s|symantec:email:cloud:atp|

    06-29-2022 10:53:30.862 +0000 WARN DateParserVerbose [27921 merging] - The TIME_FORMAT specified is matching timestamps (INVALID_TIME (1656499945449)) outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=/opt/splunk/etc/apps/TA-symantec_email/bin/symantec_collect_atp.py|host=|symantec:email:cloud:atp|

Please find the props.conf file settings for symantec:email:cloud:atp
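
A hedged props.conf sketch along the lines the warnings suggest (TIME_PREFIX and TIME_FORMAT are placeholders that have to match the real event layout): pinning timestamp recognition and widening MAX_DAYS_AGO / MAX_DAYS_HENCE normally stops the DateParserVerbose warnings.

    # props.conf on the parsing tier (heavy forwarder or indexers)
    [symantec:email:cloud:atp]
    TIME_PREFIX = "timestamp"\s*:\s*     # placeholder - anchor to the real timestamp field
    TIME_FORMAT = %s%3N                  # placeholder - e.g. epoch milliseconds
    MAX_TIMESTAMP_LOOKAHEAD = 256
    MAX_DAYS_AGO = 30
    MAX_DAYS_HENCE = 3

The affected events can be located by searching the internal logs, for example: index=_internal sourcetype=splunkd log_level=WARN DateParserVerbose "symantec:email:cloud:atp".
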
Hey guys, I need the last 30 days' stats for the use cases that did not fire on the ES console. Below is the query that I designed:

    `notable`
    | search NOT `suppression`
    | timechart usenull=f span=30d count by rule_name
    | where _time >= relative_time(now(),"-1mon")

But I am not getting the desired results, as everything is being populated into one specific date. Can someone please refine the above query, as I need the trend analysis for the use cases?
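
One thing worth checking: timechart span=30d puts the whole month into a single bucket, which is why everything lands on one date. A sketch of the same search with a daily span, keeping the macro names from the original:

    `notable`
    | search NOT `suppression`
    | timechart span=1d usenull=f count by rule_name

Run over the last 30 days (for example earliest=-30d@d in the time picker), this gives one row per day per rule_name, which is closer to a trend view.
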
I'm confused a bit. I use CIM datamodels. The "tag" field is both a filter for choosing events applicable to a particular datamodel and an attribute within the datasets. My datamodel is accelerated. When I do a simple

    | from datamodel:Authentication.Failed_Authentication

I get events with some fields selected (in fast mode) - I assume that those are the fields corresponding to the dataset-defined fields. What's most important for me here is that the "tag" field is populated as with any normal search. But when I do

    | tstats summariesonly=t dc(Authentication.tag) as tag where nodename=Authentication.Failed_Authentication by Authentication.app Authentication.src

I get zero count for dc(Authentication.tag). It would suggest that the tags are not indexed. All other fields that I tried (like src, dest, user and so on) seem to be indexed OK and I'm able to tstats for them, but the tag field is not. Is it treated somehow differently? Or is it because tag is multivalued?
Hi all! I'm trying to run multiple macros in the same search and eventually aggregate the results from each execution into a table. My current search looks like this, and it seems to work fine for a single execution of the histperc macro (provided by the Prometheus integration):

    | mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
    | stats sum(requests) AS total_requests BY metric_name, le
    | `histperc(0.5, total_requests, le, metric_name)`
    | rename histperc as Median
    | table metric_name Median 90th 75th 25th 10th

I think the issue is that the total_requests value is not passed down after the | `histperc(0.5, total_requests, le, metric_name)` row, but I am not sure if this is the case. I'm also not sure whether rename works by reference or by copy, and what would eventually happen with many renames and overrides of the histperc value as below. The histperc macro looks like this:

    sort $groupby$, $le$
    | eventstats max($hist_rate$) as total_hist_rate, last($le$) as uppermost_bound, count as num_buckets by $groupby$
    | eval rank=exact($perc$)*total_hist_rate
    | streamstats current=f last($le$) as gr, last($hist_rate$) as last_hist_rate by $groupby$
    | eval gr=if(isnull(gr), 0, gr), last_hist_rate=if(isnull(last_hist_rate), 0, last_hist_rate)
    | where $hist_rate$ >= rank
    | dedup $groupby$
    | eval res=case(lower(uppermost_bound) != "+inf" or num_buckets < 2, "NaN", lower($le$) == "+inf", gr, gr == 0 and $le$ <= 0, $le$, true(), exact(gr + ($le$-gr)*(rank - last_hist_rate) / ($hist_rate$ - last_hist_rate)))
    | fields $groupby$, res
    | rename res as "histperc"

What I want to do is something like this:

    | mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
    | stats sum(requests) AS total_requests BY metric_name, le
    | `histperc(0.5, total_requests, le, metric_name)`
    | rename histperc as Median
    | `histperc(0.9, total_requests, le, metric_name)`
    | rename histperc as 90th
    | `histperc(0.1, total_requests, le, metric_name)`
    | rename histperc as 10th
    | `histperc(0.75, total_requests, le, metric_name)`
    | rename histperc as 75th
    | `histperc(0.25, total_requests, le, metric_name)`
    | rename histperc as 25th
    | table metric_name Median 90th 75th 25th 10th

Thankful for all help!
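
The histperc macro ends with | fields $groupby$, res (and dedups rows), so total_requests and le are gone after the first call, which is why chaining the macro does not work. One hedged workaround that leaves the macro untouched is to compute each percentile in its own branch and stitch the columns together with appendcols, relying on each branch returning one row per metric_name in the same sorted order:

    | mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
    | stats sum(requests) AS total_requests BY metric_name, le
    | `histperc(0.5, total_requests, le, metric_name)`
    | rename histperc AS Median
    | appendcols
        [| mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
         | stats sum(requests) AS total_requests BY metric_name, le
         | `histperc(0.9, total_requests, le, metric_name)`
         | rename histperc AS "90th"
         | fields "90th"]
    | table metric_name Median "90th"

Repeating the appendcols block for 0.75, 0.25 and 0.1 adds the remaining columns; if the branches can return differing row counts, joining on metric_name with | join type=left metric_name [...] is the safer variant.
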
I'm struggling to create a search using an inputlookup and multiple NOT searches. Background: I have an inputlookup that is a list of telephone numbers; I want to search my recent telephone log files and get a list of entries from that inputlookup that haven't made or received calls. My current query is as follows:

    | inputlookup CUCM_lboro_assigned_numbers_27_6_22.csv
    | rename DN AS phone
    | search NOT
        [ search index=cucm cdrRecordType=1 duration>0
        | rename callingPartyNumber AS phone
        | table phone]
      AND NOT
        [ search index=cucm cdrRecordType=1 duration>0
        | rename originalCalledPartyNumber AS phone
        | table phone]
      AND NOT
        [ search index=cucm cdrRecordType=1 duration>0
        | rename finalCalledPartyNumber AS phone
        | table phone]

The problem with it is that the three queries are being individually 'search NOT' against the inputlookup, so if a number doesn't place a call (appears as callingPartyNumber), but does receive a call (originalCalledPartyNumber or finalCalledPartyNumber), it still gets listed. I only want to see numbers that haven't made calls AND haven't received calls. It's almost as if I need to build an intermediate data set of numbers that are returned from all three subsearches, then 'search NOT' that against the inputlookup. But I don't know how to do that. Any suggestions?
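
One hedged way to build that intermediate set in a single subsearch: collapse the three party-number fields into one multivalue phone field, reduce it to one row per number, and NOT the whole list against the lookup.

    | inputlookup CUCM_lboro_assigned_numbers_27_6_22.csv
    | rename DN AS phone
    | search NOT
        [ search index=cucm cdrRecordType=1 duration>0
        | eval phone=mvappend(callingPartyNumber, originalCalledPartyNumber, finalCalledPartyNumber)
        | stats count BY phone
        | fields phone ]

The subsearch returns every number that appears in any of the three fields, so only numbers that neither made nor received a call survive the NOT.
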
This is a tip, not a question. When you have a large solution, you can see from the log data which UF name the data comes from and which index server the data is stored on. What you do not see is which heavy forwarders the data passed through. Here is an app that does just that. Adding an extra field does not use extra license, since only the _raw length is counted.

Make an app that you send to all HF servers:

app name: set_name_gateway_hf

props.conf (will apply to all data)

    [source::...]
    TRANSFORMS-set_hf_server_name = set_hf_server_name

transforms.conf

    [set_hf_server_name]
    INGEST_EVAL = splunk_hf_name := splunk_server

This will use the Splunk HF server name from etc/system/local/server.conf
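
Once the transform is deployed, splunk_hf_name is an indexed field, so a quick way to check the routing (a minimal example, any index will do) is:

    | tstats count WHERE index=* BY splunk_hf_name

or, in an ordinary search:

    index=* splunk_hf_name=* | stats count by splunk_hf_name, host
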
I want to configure the Karate report so it can be viewed as a report in Splunk. How can I achieve this?
After upgrading to Splunk Enterprise 9.0 I get the following message on several dashboards:

This dashboard view is deprecated and will be removed in future versions of Splunk software. Open the updated view of this dashboard.

If I click the link, it just opens the same dashboard, except the URL has xmlv=1.1 added. Example:

    https://myserver.com/en-GB/app/Search/test_locations?earliest=-24h%40h&latest=now
    https://myserver.com/en-GB/app/Search/test_locations?xmlv=1.1&earliest=-24h%40h&latest=now

I have tried to find out how to fix the dashboard, but cannot find what to change. Does anyone have an idea how to fix this?
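
The banner refers to the Simple XML source version: declaring version 1.1 on the root element of the dashboard's XML source should make it go away (the xmlv=1.1 link only previews that view). A minimal sketch of the change:

    <dashboard version="1.1">
      ...
    </dashboard>

    <!-- or, for a dashboard with inputs -->
    <form version="1.1">
      ...
    </form>

Version 1.1 runs the dashboard on jQuery 3.5, so any embedded custom JavaScript may need checking afterwards.
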
I have a dump.json file that collects events in JSON format: {"key":"value","key":"value","key":"value","key":"value"....} I have no problem processing it; however, each line has 400 keys and I only need 30 of them in Splunk. How can I tell the universal forwarder to only send those 30 fields to my indexers? Ingesting all 400 fields consumes a lot of resources and license.
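
A universal forwarder does not parse events, so it cannot drop individual JSON keys; the trimming has to happen on the first full (heavy) Splunk instance the data reaches. A hedged sketch using INGEST_EVAL with the JSON eval functions (assumes Splunk 8.1+; the sourcetype and key names are placeholders) that rewrites _raw to keep only the wanted keys before indexing:

    # props.conf (heavy forwarder or indexer)
    [my_json_sourcetype]
    TRANSFORMS-keep_selected_keys = keep_selected_keys

    # transforms.conf
    [keep_selected_keys]
    INGEST_EVAL = _raw := json_object("keyA", json_extract(_raw, "keyA"), "keyB", json_extract(_raw, "keyB"), "keyC", json_extract(_raw, "keyC"))

Since license usage is measured on the indexed _raw, shrinking it this way should also reduce license consumption, though the forwarder still ships the full 400-key events over the network.
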