@bowesmana I'm autogenerating those milliseconds and I can't manipulate them. That's why I'm asking. I know that `earliest` and `latest` should be in seconds, but I have milliseconds as input.
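A hedged sketch of one way around this in SPL, assuming the millisecond values arrive in hypothetical fields (start_ms, end_ms are placeholders, as is the index name): dividing by 1000 yields the epoch seconds that earliest and latest accept, and | return hands the bounds to the outer search.

index=your_index [ search index=your_index
``` start_ms and end_ms are placeholder field names ```
| eval earliest=floor(start_ms/1000), latest=ceiling(end_ms/1000)
| return earliest latest ]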
I have a query…

index=blah "BAD_REQUEST"
| rex "(?i) requestId (?P<requestId>[^:]+)"
| table requestId
| dedup requestId

…that returns 7 records/fields:

92d246dd-7aac-41f7-a398-27586062e4fa
ba79c6f5-5452-4211-9b89-59d577adbc50
711b9bb4-b9f1-4a2b-ba56-f2b3a9cdf87c
e227202a-0b0a-4cdf-9b11-3080b0ce280f
6099d5a3-61fc-418b-87b4-ddc57c482dd6
348fb576-0c36-4de9-a55a-97157b00a304
c34b7b96-094d-45bb-b03d-f9c98a4efd5f

…that I then want to use as input for another search on the same index. I looked at the manual and can see that subsearches are allowed [About subsearches - Splunk Documentation], but when I add my subsearch as input…

index=blah [search index=blah "BAD_REQUEST" | rex "(?i) requestId (?P<requestId>[^:]+)" | table requestId | dedup requestId]

…I would have expected at least 7 records to be returned, BUT I do not see any output. There are no syntax issues, so can someone explain to me what I'm not seeing/doing? Any help appreciated.
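One way to inspect what the outer search actually receives is to run the subsearch on its own with | format appended; it renders the expanded search string (by default something like ( ( requestId="92d246dd-7aac-41f7-a398-27586062e4fa" ) OR ... )):

index=blah "BAD_REQUEST"
| rex "(?i) requestId (?P<requestId>[^:]+)"
| table requestId
| dedup requestId
| format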
Hi @yh , manually add it and you'll find it. Remember that to see the index field in | tstats searches, you have to use the data model prefix (e.g. Authentication.index). Ciao. Giuseppe
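P.S.: a minimal sketch of that prefix usage, assuming an Authentication data model with the index field added as described:

| tstats count from datamodel=Authentication by Authentication.index, Authentication.user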
Hi @Hardy_0001 , the Splunk team confirmed that this is a bug in Splunk version 9.2.0.1. The Splunk Dev team is working on it; we can wait until they release a fixed version.
Limitations of MonitorNoHandle are really significant: <path> must be a fully qualified path name to a specific file. Wildcards and directories are not accepted. In my situation, it means that I need a script-generated inputs.conf that will contain hundreds of monitor stanzas.
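For illustration, a hedged sketch of what each generated stanza would look like; the path, sourcetype, and index values are placeholders:

[MonitorNoHandle://C:\Windows\System32\LogFiles\example.log]
sourcetype = example_log
index = windows
disabled = 0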
hi @gcusello I think that would be useful. I tried to add the index field in the data model but it seems I'm not able to. I don't see that field in the auto-extracted options. I can see fields like host and sourcetype being inherited from BaseEvent in the JSON. I am wondering, shall I modify the JSON then? Not sure if that is the right way. Can't seem to figure out how to add the index using the data model editor. Thanks again
Hi @yh , you can customize your Data Model by adding some fields (e.g. I usually also add the index) following your requirements, but don't duplicate them! Ciao. Giuseppe
Hi @rickymckenzie10, at first, this isn't a question for the Community but for Splunk PS or a Splunk Certified Architect! Anyway, if you have data that exceeds the retention period, it means that the same bucket contains events that are still within the retention period, and for this reason the bucket isn't discarded. I don't like to change the default indexes parameters. But you have reached the max dimension of some of your indexes, and for this reason some of them will be discarded in a short time. What's your issue: that there are events exceeding the retention period without being discarded, or that you reached the max dimension? In the first case you only have to wait; in the second case you have to enlarge the index max dimension. I don't see any configuration issues; maybe the maxWarmDbCount is high. Ciao. Giuseppe
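P.S.: for reference, a hedged sketch of the two indexes.conf parameters involved; the stanza name and values are illustrative only:

[your_index]
# seconds after which events become eligible for freezing (retention)
frozenTimePeriodInSecs = 7776000
# maximum total index size in MB; oldest buckets roll to frozen when exceeded
maxTotalDataSizeMB = 500000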
You just use <dashboard version="1.1" script="simple_xml_examples:table_icons_inline.js">. It definitely works for script; I've never used it with CSS, but I assume that will work too (see the sketch below). NB: If you are on Victoria Cloud, you can upload your own apps containing JS and CSS, as long as they have gone through the appinspect process.
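For completeness, a hedged example combining both attributes with the app:file prefix syntax; the stylesheet half is the untested assumption mentioned above:

<dashboard version="1.1" script="simple_xml_examples:table_icons_inline.js" stylesheet="simple_xml_examples:table_decorations.css">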
@whitecat001 The best starting point is to view the KV store events from the Monitoring Console, then look for events that correspond to any issues and build alerts based on them. Below is a sample query you can use to view the health status of KV stores. Alert on health_info -> red:

|rest /services/server/info
| eval a=now()
| eval time=strftime(a,"%Y-%m-%d %H:%M:%S")
| table time host kvStoreStatus author health_info isForwarding server_roles
| sort host

If the reply helps, a karma upvote would be appreciated.
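As a follow-up to the sample above, a hedged variant that keeps only the red state for alerting:

|rest /services/server/info
| where health_info="red"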
If you get a limit of '100', you must be picking mind-reading #1, as you didn't pick #2. That is just the problem with list(). You can increase this limit somewhat (see the [stats|sistats] stanza in limits.conf), but be very careful. As to BATCH_ID, I still don't know what 134 and 26 mean. One correlationId? All events? Is it because of the 100 limit of list_maxsize? You should probably post a new question with a proper setup and a detailed explanation.
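For reference, a hedged sketch of that limits.conf change; the value is illustrative, and raising it increases memory use:

# limits.conf
[stats|sistats]
list_maxsize = 500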
I'm using Splunk Cloud so I cannot upload CSS and JS files as a self-service thing. When I use <dashboard version="1.1" script="table_icons_inline.js" stylesheet="table_decorations.css"> from within the Dashboard Examples app (simple_xml_examples), it works fine. My question is that I want to use those JS and CSS files from the Dashboard Examples app in other apps too. I believe there should be some way to add those files by providing relative paths, like: <dashboard version="1.1" script="apps/simple_xml_examples/appserver/static/table_icons_inline.js" stylesheet="apps/simple_xml_examples/appserver/static/table_decorations.css"> Any ideas?
I am facing the same issue while using a scripted input. Did you find any way to identify the root cause and fix it? We are receiving data from a scripted input. We also tried putting that data in a CSV file which has all the data, but we are still observing issues with missing data.
The Vector splunk_hec_logs sink [1] supports the compression algorithms gzip, snappy, zlib, and zstd. It seems the Splunk HEC server only supports gzip (I am using docker.io/splunk/splunk 9.2). Does Splunk HEC support snappy, zlib, or zstd? Is it possible to enable these algorithms besides gzip? [1] https://vector.dev/docs/reference/configuration/sinks/splunk_hec_logs/#compression
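For comparison, a hedged sketch of the gzip path that does work here, assuming the default HEC port 8088 and a placeholder <token>:

# compress the payload, then declare it via Content-Encoding
gzip -c event.json > event.json.gz
curl -k "https://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -H "Content-Encoding: gzip" \
  --data-binary @event.json.gz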
Ah, so I missed your point somewhat, in that the list contains all the values you want and you should alert if one is missing from the data. You can do this:

| stats count by Time Value
| append [
| inputlookup lookup.csv
``` Filter the entries you expect here, e.g. using addinfo ```
``` | where Time is in the range you want ```
]
| stats count by Time Value
| where count=1

…which adds the rows from the lookup to the end of your found data and then does the stats count again. If count=1, the row has only come from the lookup. The filtering (where...) will need to work out what time range your search covers. Use the addinfo command to get the info_min_time and info_max_time fields, which you can then use to filter those values from the lookup you want.
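A hedged sketch of that filter, assuming Time holds epoch seconds:

| inputlookup lookup.csv
| addinfo
| where Time >= info_min_time AND Time <= info_max_time
| fields - info_*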