All Posts

OK. Because I think you might be misunderstanding something. CIM is just a definition of fields which should either be present directly in your events or be defined as calculated fields or automatic lookups. So the way to go is not to fiddle with the definition of the datamodel to fit the data, but rather the other way around: modify the data to fit the datamodel. There is already a good candidate for the "location" field, which I showed earlier: the dvc_zone field. You can fill it in at search time or at index time, or even set it "statically" at the input level by using the _meta option.
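As a sketch of the _meta approach (the monitor path, index, and zone value here are illustrative, not from your environment):

```ini
# inputs.conf on the forwarder -- adds an indexed field to every event from this input
[monitor:///var/log/fw/fw.log]
index = network
sourcetype = fw:log
# "dvc_zone::dmz" creates an indexed field dvc_zone with value dmz
_meta = dvc_zone::dmz
```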
You could output their choices to a CSV lookup with outputlookup; these can be made user-specific with the create_context argument (see outputlookup - Splunk Documentation).
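A minimal sketch of that idea, assuming a hypothetical lookup file user_filters.csv and a field holding the user's selection (check the outputlookup docs for the exact create_context support in your version):

```
... search that captures the user's choices ...
| table user selected_value
| outputlookup create_context=user user_filters.csv
```

The dashboard searches would then read the saved choices back with `| inputlookup user_filters.csv`.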
If it works then it is OK
As I wrote before - "I assume you checked the name for this particular Event Log (the name of the stanza must match the "Full Name" property from the Event Log properties page)". Especially the part in parentheses is important. And yes, the naming of Event Logs can be a bit confusing sometimes. (You can of course get the Event Log name with a quick PowerShell command as well, without the need to click through the Event Viewer.)
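For reference, a quick way to list every Event Log "Full Name" from PowerShell (the inputs.conf stanza shown in the comment is illustrative):

```
# List every Event Log name in the exact form the inputs.conf stanza must match
Get-WinEvent -ListLog * | Select-Object -ExpandProperty LogName

# The matching stanza would then look like, e.g.:
# [WinEventLog://Microsoft-Windows-TaskScheduler/Operational]
```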
After the bin command, period_start will be an epoch (Unix) time aligned to the start of the hour. To get a match, you should parse/reformat/convert the time from your lookup into a similarly aligned Unix time. Then the stats command can match against the time and the value.
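A sketch of that conversion, assuming the lookup (name and field made up here) has a text field called period in the format %Y-%m-%d %H:%M:%S — adjust the format string to your actual data:

```
| inputlookup my_thresholds.csv
| eval period_start = relative_time(strptime(period, "%Y-%m-%d %H:%M:%S"), "@h")
```

strptime parses the text into epoch seconds, and relative_time with "@h" snaps it to the start of the hour, matching what bin produced.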
I want to use syslog-ng to feed data from a universal forwarder into my search head. I will use TCP, but I don't know where it went wrong; I cannot see my data on the search head. This is my syslog-ng splunk.conf:

template syslog { template("${DATE} ${HOST} ${MESSAGE}\n"); };
rewrite rewrite_stripping_priority { subst("^\<\\d+>", "", value(MESSAGE)); };
source src_udp_514 { udp(ip("0.0.0.0") so_rcvbuf(16777216) keep_timestamp(yes) flags(no-parse)); };
destination dest_tcp_10001 { tcp("127.0.0.1" port(10001) template("syslog")); };
filter f_linux_server { netmask(172.18.0.8/32) };
destination dest_tcp_10002 { tcp("127.0.0.1" port(10002) template("syslog")); };
filter f_linux_server2 { netmask(172.18.0.9/32) };
log {
  source(src_udp_514);
  rewrite(rewrite_stripping_priority);
  if (filter(f_linux_server)) { destination(dest_tcp_10001); }
  elif (filter(f_linux_server2)) { destination(dest_tcp_10002); };
};

I also already set up TCP 10001 and 10002 on my universal forwarder.
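For comparison, the forwarder side would need matching TCP input stanzas along these lines (the index and sourcetype names here are illustrative; yours may differ):

```ini
# inputs.conf on the universal forwarder -- one listener per syslog-ng destination
[tcp://10001]
index = linux_server1
sourcetype = syslog

[tcp://10002]
index = linux_server2
sourcetype = syslog
```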
Dear Splunk Community, I am currently creating a Splunk dashboard and would like to save user-defined filters in the dashboard, even after Splunk has been reopened. Background: I have a table on one layer whose data comes from SAP to Splunk via Push Extractor. Not all of the data displayed in the table is relevant, so I want to hide certain rows using a dropdown field/checkbox; those rows should then no longer be included in the visualizations on the other layers. How can I ensure that these hidden rows are no longer included in the visualizations for all users of the dashboard? How can I ensure that the filter settings remain in effect even after the dashboard has been closed for more than 24 hours? I really hope that someone can help me with this. Kind regards, Julian
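One common pattern for the hiding part is a dropdown whose token is spliced into every panel search as a filter clause. A sketch in Simple XML; the field name status and its values are made up for illustration:

```xml
<input type="dropdown" token="row_filter" searchWhenChanged="true">
  <label>Rows</label>
  <choice value="*">Show all</choice>
  <choice value="NOT status=&quot;irrelevant&quot;">Hide irrelevant</choice>
  <default>*</default>
</input>
<!-- each panel search then includes the token -->
<!-- <query>index=sap_data $row_filter$ | stats count by status</query> -->
```

To make a filter persist across sessions and apply to all users, the choice set by `<default>` in the dashboard XML itself (or a saved lookup driving the filter) is what survives; plain tokens are reset when the dashboard is closed.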
@bowesmana I'm autogenerating those milliseconds and I can't manipulate them. That's why I'm asking. I know that `earliest` and `latest` should be in seconds, but I have milliseconds as input.
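If the input values are epoch milliseconds, a small eval can turn them into the seconds that earliest/latest expect. A sketch, assuming a hypothetical field time_ms holding the millisecond timestamps:

```
| eval earliest_s = floor(time_ms / 1000)
| eval latest_s = ceiling(time_ms / 1000)
```

floor/ceiling keep the window inclusive of the original millisecond instant after the division.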
I have a query:

index=blah "BAD_REQUEST" | rex "(?i) requestId (?P<requestId>[^:]+)" | table requestId | dedup requestId

that returns 7 records:

92d246dd-7aac-41f7-a398-27586062e4fa
ba79c6f5-5452-4211-9b89-59d577adbc50
711b9bb4-b9f1-4a2b-ba56-f2b3a9cdf87c
e227202a-0b0a-4cdf-9b11-3080b0ce280f
6099d5a3-61fc-418b-87b4-ddc57c482dd6
348fb576-0c36-4de9-a55a-97157b00a304
c34b7b96-094d-45bb-b03d-f9c98a4efd5f

that I then want to use as input for another search on the same index. I looked at the manual and can see that subsearches are allowed [About subsearches - Splunk Documentation], but when I add my subsearch as input:

index=blah [search index=blah "BAD_REQUEST" | rex "(?i) requestId (?P<requestId>[^:]+)" | table requestId | dedup requestId]

I would have expected at least 7 records to be returned, but I do not see any output. There are no syntax issues, so can someone explain what I'm not seeing/doing? Any help appreciated.
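One likely cause: the subsearch results are rendered into the outer search as `requestId="..." OR requestId="..."`, but requestId is a rex-extracted field that does not exist at the point the outer index search runs, so nothing matches. Renaming the field to `search` makes the subsearch emit the bare values as raw search terms instead. A sketch of that fix:

```
index=blah
    [search index=blah "BAD_REQUEST"
    | rex "(?i) requestId (?P<requestId>[^:]+)"
    | dedup requestId
    | fields requestId
    | rename requestId AS search
    | format]
```

With the rename, the expanded outer search becomes `index=blah ("92d246dd-..." OR "ba79c6f5-..." OR ...)`, which matches the raw event text.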
Hi @yh , manually add it and you'll find it. Remember that to see the index field, in the | tstats searches, you have to use the prefix (e.g. Authentication.index). Ciao. Giuseppe
Hi @Hardy_0001, the Splunk team confirmed that it is a bug in Splunk version 9.2.0.1. The Splunk Dev team is working on it. We can wait until they release a fixed version.
The limitations of MonitorNoHandle are really significant: <path> must be a fully qualified path name to a specific file. Wildcards and directories are not accepted. In my situation, it means that I need a script-generated inputs.conf that will contain hundreds of monitor stanzas.
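For illustration, such a generated inputs.conf ends up as one stanza per file, along these lines (paths and index name are made up):

```ini
# Each MonitorNoHandle stanza must name one fully qualified file -- no wildcards
[MonitorNoHandle://C:\logs\app1\current.log]
index = app_logs

[MonitorNoHandle://C:\logs\app2\current.log]
index = app_logs
```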
Hi @gcusello, I think that would be useful. I tried to add the index field in the data model, but it seems I'm not able to. I don't see that field in the auto-extracted options. I can see fields like host and sourcetype being inherited from BaseEvent in the JSON. I am wondering, shall I modify the JSON then? Not sure if that is the right way. Can't seem to figure out how to add the index using the data model editor. Thanks again.
Thanks. That worked
Hi @yh, you can customize your Data Model by adding some fields (e.g. I usually also add the index) according to your requirements, but don't duplicate them! Ciao. Giuseppe
Hi @rickymckenzie10, first of all, this isn't a question for the Community but for Splunk PS or a Splunk Certified Architect! Anyway, if you have data that exceeds the retention period, it means that in the same bucket you have events that are still within the retention period, and for this reason the bucket isn't discarded. I don't like to change the default index parameters. But you reached the max size of some of your indexes, and for this reason some buckets will be discarded in a short time. What's your issue: that there are events exceeding the retention period without being discarded, or that you reached the max size? In the first case, you only have to wait; in the second case, you have to enlarge the index max size. I don't see any configuration issues; maybe the maxWarmDbCount is high. Ciao. Giuseppe
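For reference, the two parameters in play sit in indexes.conf and look like this (stanza name and values are illustrative):

```ini
[my_index]
# events older than this many seconds become eligible for freezing -- 90 days here;
# a bucket is only frozen once its *newest* event passes this age
frozenTimePeriodInSecs = 7776000
# whole-index size cap in MB; oldest buckets are frozen when it is exceeded
maxTotalDataSizeMB = 500000
```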
You just use  <dashboard version="1.1" script="simple_xml_examples:table_icons_inline.js"> It definitely works for script - I've never used it with css, but I assume that will work too. NB: If you are on Victoria Cloud, you can upload your own apps containing JS and CSS, as long as they have gone through the appinspect process.
Yes, I'm customizing the login screen for Splunk Enterprise, not for Splunk Cloud. I'm ignoring these failures. Thanks. @richgalloway 
@whitecat001 The best starting point is to view the KV store events from the Monitoring Console. Then look for events that correspond to any issues and build alerts based on them. Below is a sample query you can use to view the health status of KV stores. Alert on health_info = red.

| rest /services/server/info
| eval a=now()
| eval time=strftime(a,"%Y-%m-%d %H:%M:%S")
| table time host kvStoreStatus author health_info isForwarding server_roles
| sort host

If the reply helps, a karma upvote would be appreciated.
@bowesmana Thanks, that's helpful.