Hi All, we are observing a high number of parsing issues on sourcetype=symantec:email:cloud:atp. We haven't made any changes to the add-on. Please suggest how to identify exactly which events are affected and how to resolve this. The warnings are:

Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Jun 29 10:52:21 2022). Context: source=/opt/splunk/etc/apps/TA-symantec_email/bin/symantec_collect_atp.py|host=s|symantec:email:cloud:atp|

06-29-2022 10:53:30.862 +0000 WARN DateParserVerbose [27921 merging] - The TIME_FORMAT specified is matching timestamps (INVALID_TIME (1656499945449)) outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=/opt/splunk/etc/apps/TA-symantec_email/bin/symantec_collect_atp.py|host=|symantec:email:cloud:atp|

Please find the props.conf settings for symantec:email:cloud:atp
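Worth noting: the INVALID_TIME value in the second warning (1656499945449) looks like epoch milliseconds. If the sourcetype's TIME_FORMAT treats those 13 digits as plain epoch seconds, the parsed time lands far outside the MAX_DAYS_AGO/MAX_DAYS_HENCE window. A minimal props.conf sketch that reads the value as epoch-with-milliseconds instead; the stanza values here are assumptions, not the TA's shipped settings:

[symantec:email:cloud:atp]
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 128

%s%3N tells the date parser to interpret the matched digits as epoch seconds followed by three millisecond digits.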
Hey guys, I need the last 30 days of stats for the use cases that did not fire on the ES console. Below is the query I designed:

`notable` | search NOT `suppression` | timechart usenull=f span=30d count by rule_name | where _time >= relative_time(now(),"-1mon")

But I am not getting the desired results; everything is populated onto one specific date. Can someone please refine the above query? I need a trend analysis for the use cases.
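A likely cause is that span=30d puts the entire window into a single 30-day bucket, so every series shows one date. A sketch that keeps daily buckets over a 30-day window instead, assuming the time range can be set with the time picker or inline earliest/latest terms:

`notable` earliest=-30d@d latest=now
| search NOT `suppression`
| timechart span=1d usenull=f count by rule_name

Filtering on _time after timechart only trims already-built buckets; restricting earliest/latest up front limits the events themselves.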
I'm a bit confused. I use CIM data models. The "tag" field is both a filter for choosing events applicable to a particular data model and an attribute within the datasets. My data model is accelerated. When I do a simple

| from datamodel:Authentication.Failed_Authentication

I get events with some fields selected (in fast mode); I assume those are the fields corresponding to the dataset-defined fields. What's most important for me here is that the "tag" field is populated as in any normal search. But when I do

| tstats summariesonly=t dc(Authentication.tag) as tag where nodename=Authentication.Failed_Authentication by Authentication.app Authentication.src

I get a zero count for dc(Authentication.tag). That would suggest the tags are not indexed. All the other fields I tried (src, dest, user and so on) seem to be indexed fine and I can tstats on them, but the tag field is not. Is it treated somehow differently? Or is it because tag is multivalued?
Hi all! I'm trying to run multiple macros in the same search and eventually aggregate the results from each execution into a table. My current search looks like this, which seems to work fine for a single execution of the histperc macro (provided by the Prometheus integration):

| mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
| stats sum(requests) AS total_requests BY metric_name, le
| `histperc(0.5, total_requests, le, metric_name)`
| rename histperc as Median
| table metric_name Median 90th 75th 25th 10th

I think the issue is that the total_requests value is not passed down after the | `histperc(0.5, total_requests, le, metric_name)` row, but I am not sure if this is the case. I'm also not sure whether rename works by reference or by copy, and what would eventually happen with many renames and overrides of the histperc value as below. The histperc macro looks like this:

sort $groupby$, $le$
| eventstats max($hist_rate$) as total_hist_rate, last($le$) as uppermost_bound, count as num_buckets by $groupby$
| eval rank=exact($perc$)*total_hist_rate
| streamstats current=f last($le$) as gr, last($hist_rate$) as last_hist_rate by $groupby$
| eval gr=if(isnull(gr), 0, gr), last_hist_rate=if(isnull(last_hist_rate), 0, last_hist_rate)
| where $hist_rate$ >= rank
| dedup $groupby$
| eval res=case(lower(uppermost_bound) != "+inf" or num_buckets < 2, "NaN", lower($le$) == "+inf", gr, gr == 0 and $le$ <= 0, $le$, true(), exact(gr + ($le$-gr)*(rank - last_hist_rate) / ($hist_rate$ - last_hist_rate)))
| fields $groupby$, res
| rename res as "histperc"

What I want to do is something like this:

| mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
| stats sum(requests) AS total_requests BY metric_name, le
| `histperc(0.5, total_requests, le, metric_name)` | rename histperc as Median
| `histperc(0.9, total_requests, le, metric_name)` | rename histperc as 90th
| `histperc(0.1, total_requests, le, metric_name)` | rename histperc as 10th
| `histperc(0.75, total_requests, le, metric_name)` | rename histperc as 75th
| `histperc(0.25, total_requests, le, metric_name)` | rename histperc as 25th
| table metric_name Median 90th 75th 25th 10th

Thankful for all help!
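For what it's worth, the macro's trailing | fields $groupby$, res keeps only the group-by fields and the result, so total_requests and le no longer exist by the time the second `histperc` call runs; rename also relabels a field rather than copying it. One workaround, sketched here under the assumption that the macro itself cannot be edited, is to compute each percentile from its own copy of the base search and join on metric_name:

| mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
| stats sum(requests) AS total_requests BY metric_name, le
| `histperc(0.5, total_requests, le, metric_name)`
| rename histperc as Median
| join type=left metric_name
    [| mstats rate(_value) AS requests WHERE "index"="MyIndex" AND metric_name="MyMetricNameRegex" BY metric_name, le
     | stats sum(requests) AS total_requests BY metric_name, le
     | `histperc(0.9, total_requests, le, metric_name)`
     | rename histperc as "90th"]
| table metric_name Median 90th

The same join block would repeat for the 75th, 25th, and 10th percentiles. It is verbose, but each subsearch gets a fresh total_requests/le, which sidesteps the fields-dropping behaviour entirely.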
I'm struggling to create a search using an inputlookup and multiple NOT searches. Background: I have an inputlookup that is a list of telephone numbers; I want to search my recent telephone log files and get a list of entries from that inputlookup that haven't made or received calls. My current query is as follows:

| inputlookup CUCM_lboro_assigned_numbers_27_6_22.csv
| rename DN AS phone
| search NOT [ search index=cucm cdrRecordType=1 duration>0 | rename callingPartyNumber AS phone | table phone]
    AND NOT [ search index=cucm cdrRecordType=1 duration>0 | rename originalCalledPartyNumber AS phone | table phone]
    AND NOT [ search index=cucm cdrRecordType=1 duration>0 | rename finalCalledPartyNumber AS phone | table phone]

The problem is that the three subsearches are individually 'search NOT'-ed against the inputlookup, so if a number doesn't place a call (appears as callingPartyNumber) but does receive a call (originalCalledPartyNumber or finalCalledPartyNumber), it still gets listed. I only want to see numbers that haven't made calls AND haven't received calls. It's almost as if I need to build an intermediate data set of the numbers returned from all three subsearches, then 'search NOT' that against the inputlookup. But I don't know how to do that. Any suggestions?
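One way to build that intermediate set, sketched on the assumption that the three CDR field names above are the complete list, is to collect all three party-number fields into a single phone field inside one subsearch:

| inputlookup CUCM_lboro_assigned_numbers_27_6_22.csv
| rename DN AS phone
| search NOT [ search index=cucm cdrRecordType=1 duration>0
    | eval phone=mvappend(callingPartyNumber, originalCalledPartyNumber, finalCalledPartyNumber)
    | mvexpand phone
    | dedup phone
    | table phone ]

The subsearch returns a single deduplicated list of every number that made or received a call, so the outer NOT excludes a number only if it appears in none of the three fields.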
This is a tip, not a question. In a large deployment you can see, from the log data, which UF the data came from and which indexer it is stored on. What you do not see is which heavy forwarders the data passed through. Here is an app that does just that. Adding an extra indexed field does not use extra license, since only the _raw length is counted.

Make an app that you send to all HF servers:

app name: set_name_gateway_hf

props.conf (will apply to all data):

[source::...]
TRANSFORMS_set_hf_server_name = set_hf_server_name

transforms.conf:

[set_hf_server_name]
INGEST_EVAL = splunk_hf_name := splunk_server

This will use the Splunk HF server name from etc/system/local/server.conf.
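Once the app is deployed, a quick sanity check might look like this (the field name matches the INGEST_EVAL above; the index is a placeholder):

index=main splunk_hf_name=* | stats count by splunk_hf_name, host, sourcetype

Events that never traversed a heavy forwarder simply won't carry the field, which is itself useful for spotting forwarding paths you didn't expect.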
I want to send Karate test reports into Splunk and view them as a report there. How can I achieve this?
After upgrading to Splunk Enterprise 9.0 I get the following message on several dashboards:

This dashboard view is deprecated and will be removed in future versions of Splunk software. Open the updated view of this dashboard.

If I click the link, it just opens the same dashboard, except the URL has xmlv=1.1 added. Example:

https://myserver.com/en-GB/app/Search/test_locations?earliest=-24h%40h&latest=now
https://myserver.com/en-GB/app/Search/test_locations?xmlv=1.1&earliest=-24h%40h&latest=now

I have tried to find out how to fix the dashboard, but cannot find what to change. Does anyone have an idea how to fix this?
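If I read the xmlv=1.1 parameter correctly, the message refers to the Simple XML schema version: dashboards created before 9.0 default to version 1.0, and the "updated view" is the same dashboard rendered as version 1.1. Assuming that is what's happening here, editing the dashboard source and declaring the version on the root element, for example changing

<form>

to

<form version="1.1">

(or <dashboard version="1.1"> for dashboards without inputs) should make the message go away, once any features removed in 1.1 (such as inline JavaScript/CSS) are dealt with.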
I have a dump.json file that collects events in JSON format:

{"key":"value","key":"value","key":"value","key":"value"....}

I have no problem processing it; however, each line has 400 keys and I only need 30 of them in Splunk. How can I tell the universal forwarder to only send those 30 fields to my indexers? Ingesting all 400 fields consumes a lot of resources and license.
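For context, a universal forwarder ships raw events and does not parse them, so key filtering normally has to happen at a parsing layer (a heavy forwarder or the indexers). A heavily-assumed props.conf sketch for that layer, using SEDCMD to strip one unwanted key from the raw JSON before indexing; the sourcetype name and key are placeholders, and the pattern assumes flat string values with no nesting:

[my:json:sourcetype]
SEDCMD-drop_unwanted = s/"unwanted_key":"[^"]*",?//g

One SEDCMD per key gets unwieldy with 370 keys to drop, so reshaping the JSON at the source (whatever writes dump.json) is usually the cleaner fix.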
We are looking for an app that provides domain reputation/creation data and, if possible, subdomains. Which app is recommended?
Hello, I recently upgraded our deployer/deployment server from 8.1.6 to version 9.0, and when I try to push configuration to our search head cluster I get an error that I have not seen before:

[splunk@aa130XXXXX bin]$ ./splunk apply shcluster-bundle -target https://aa130XXXXX:8089
Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for the details. Do you wish to continue? [y/n]: y
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Your session is invalid. Please login.
Splunk username: XXXXX
Password:
Error in pre-deploy check, uri=https://aa130XXXXX:8089/services/shcluster/captain/kvstore-upgrade/status, status=401, error=No error

Our search head cluster is still on version 8.1.6. Thanks!
How should I specify the bottom (minimum) value of the chart's y-axis so the lines don't look so flat?
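If this is a Simple XML chart, the y-axis minimum can be pinned explicitly in the panel source; a small sketch, where the value 0 is just an example:

<option name="charting.axisY.minimumNumber">0</option>

Leaving the minimum unset lets Splunk auto-scale the axis, which can flatten or exaggerate small variations depending on the data range.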
indexA:

field1  field2  field3
A       1       1
A       1       2
A       1       3
A       2       5
B       1       4
B       2       3
B       3       2
C       1       6
C       2       7

indexB:

field4  field5  field6
A       1       3
B       2       4
C       1       5
C       1       6

I want to join these two indexes on two fields (field1=field4 AND field2=field5).

Result:

field1  field2  field3  field6
A       1       1       3
A       1       2
A       1       3
A       2       5
B       1       4
B       2       3       4
B       3       2
C       1       6       5
                        6
C       2       7
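A sketch of one way to do this in SPL, assuming both are event indexes and that multiple matches should all be kept (max=0):

index=indexA
| join type=left max=0 field1 field2
    [ search index=indexB
      | rename field4 AS field1, field5 AS field2
      | table field1 field2 field6 ]
| table field1 field2 field3 field6

join matches on field names, so field4/field5 are renamed to field1/field2 inside the subsearch. Note this fills field6 on every matching row, not only the first one, which differs slightly from the sample result above.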
We recently rebuilt a server which had the Splunk UF installed. After the rebuild the IP remained the same but the hostname changed. When we reinstalled the UF, pointed it to the deployment server, and added the deployment client to the serverclass, none of the apps could be downloaded, due to a checksum mismatch error. I tried everything from removing the DC from the serverclass and disabling the deploymentclient config on the DC, to reinstalling the UF, but nothing changed.

Deployment server version: 7.3.6
Deployment client: 9.0

Can someone please help fix this issue?
Hi Community, I'm using the Splunk Java SDK in my application, this version to be exact:

implementation group: 'com.splunk', name: 'splunk', version: '1.6.5.0'

In the app, I'm trying to get some stats on a metric from Splunk logs. Here's the native search command in Splunk:

`myapp` "Message of interest"
| eventstats min(metricOfInterest) as ft_min max(metricOfInterest) as ft_max avg(metricOfInterest) as ft_avg stdev(metricOfInterest) as ft_stdev
| fields ft_min, ft_max, ft_avg, ft_stdev

This query returns a bunch of events plus the four additional fields ft_min, ft_max, ft_avg, ft_stdev on each event. For the sake of the conversation, let's say 200 events matched the search. In my app, the SplunkResponse contains 200 Map<String, Object> entries, each map representing an event. What I want is a single entry that contains only ft_min, ft_max, ft_avg, ft_stdev. Right now I can extract it from one event (among those 200), but having all the events is too verbose and unnecessary. Is this achievable by tweaking the query or by using a particular SDK API?

Thanks, Tuan
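One likely fix on the query side: eventstats annotates every event, whereas stats collapses the result set to just the aggregate rows. A sketch, assuming the `myapp` macro behaves as above:

`myapp` "Message of interest"
| stats min(metricOfInterest) as ft_min max(metricOfInterest) as ft_max avg(metricOfInterest) as ft_avg stdev(metricOfInterest) as ft_stdev

With no BY clause this returns exactly one row, so the SDK should receive a single Map containing only the four fields.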
I have garbage-collection event data in Splunk. Example lines:

2022-06-26T21:47:53.142+0000: 8888.588: Total time for which process threads were stopped: 0.0015059 seconds, Stopping threads took: 0.0002620 seconds 2022-06-28T23:
2022-06-26T22:47:57.142+0000: 66666.588: Total time for which process threads were stopped: 0.0015059 seconds, Stopping threads took: 0.0002620 seconds 2022-06-28T23:

I have to create a Splunk alert that parses this Java garbage-collection data and fires when the number of seconds after "stopped:" in the log line above is greater than a certain threshold. I used Splunk's auto-regex feature to extract the data (e.g. "stopped: 0.0015059 seconds") as a new field, choosing "stopped: 0.0067871 seconds" as the sample. The generated regex was:

^(?:[^ \n]* ){9}(?P<pause>[^,]+)

When I use the where condition pause > 0, no event data is returned:

...| rex field=_raw "^(?:[^ \n]* ){9}(?P<pause>[^,]+)" | where pause > 0

Any idea how to work with the number inside an extracted field like this? Thanks
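The likely problem is that the capture grabs the whole phrase "stopped: 0.0015059 seconds", which is not numeric, so the numeric comparison silently matches nothing. A sketch that captures only the digits and converts explicitly; the threshold value is a placeholder:

... | rex field=_raw "stopped: (?<pause>[\d.]+) seconds"
| eval pause=tonumber(pause)
| where pause > 0.001

The eval tonumber() is belt-and-braces; the important part is keeping "stopped:" and "seconds" out of the capture group so that where can compare the value as a number.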
All of our stuff is on-prem. Currently our dedicated deployment servers also have the search head role on them; should they? Is there any harm in removing it? We do have other servers with dedicated search head roles on them.
Hi Team, how can I display multiple values in a single dashboard panel?
Hi, I am trying to get a static option that is "All" of the individual static options combined. The mCode field contains different values in different events, and I would like to list all the events with a specific mCode value. When I paste the query into a regular SPL search I get the correct results; however, in a dashboard it tells me "no results found". The token I am using for the static options is mcode, and all the individual static options work correctly:

<query>
| multisearch
    [ | from datamodel:"model1" ]
    [ | from datamodel:"model1" ]
| fields "Action" "pCode" "mCode" "pCode2"
| search Action="*" pCode="$pCode$" pCode2="*"
| where mCode IN ("$mCode$")
</query>

For the "All" static option I tried % ... * ... even value1","value2","value3, but nothing seems to work in the dashboard. Any help would be appreciated.
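One thing worth checking, offered as a sketch rather than a confirmed fix: where IN does exact comparison, so an "All" value of * is treated as a literal asterisk. Moving the filter into search, which does support wildcards, would let an All option whose value is simply * match everything:

| search Action="*" pCode="$pCode$" pCode2="*" mCode="$mCode$"

With that form, each individual static option keeps its literal value and the All option's value is just *.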
Hi, I need a query that covers non-business hours and weekends.
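A minimal sketch, assuming business hours of 08:00-18:00 Monday to Friday; the index name and hour boundaries are placeholders:

index=your_index (date_wday=saturday OR date_wday=sunday OR date_hour<8 OR date_hour>=18)

Note that the date_* fields reflect the timestamp as written in the raw event, which may differ from the searching user's time zone.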