Activity Feed
- Got Karma for Re: consequence of lookup command with or without OUTPUT parameter. 05-03-2024 12:42 AM
- Got Karma for Re: What exactly is a tsidx file?. 05-22-2023 09:41 PM
- Got Karma for Re: How to extract only the first three octets of the IP address instead of the whole address?. 04-05-2023 02:54 AM
- Got Karma for Re: inputs.conf entry to get linux operating system name and version. 03-01-2023 03:49 PM
- Got Karma for Re: Index clustering question (troubleshooting sbux)..Missing={ default:1 }. 01-06-2023 10:16 AM
- Got Karma for Re: What exactly is a tsidx file?. 12-12-2022 02:45 PM
- Got Karma for Re: Is it possible to add a default value for a lookup without match?. 11-17-2022 10:45 AM
- Got Karma for Re: Problem parsing indexes.conf: Cannot load IndexConfig: .... 08-03-2022 04:21 AM
- Got Karma for Re: How to extract only the first three octets of the IP address instead of the whole address?. 02-11-2022 04:23 AM
- Got Karma for Re: How to extract only the first three octets of the IP address instead of the whole address?. 02-07-2022 09:56 AM
- Got Karma for Re: What exactly is a tsidx file?. 11-06-2021 10:09 PM
- Posted Re: Azure VM - Won't report to Deployment server on Getting Data In. 11-03-2021 12:22 PM
- Posted Re: Assistance creating alerts on Alerting. 09-15-2021 03:25 PM
- Posted Re: Assistance creating alerts on Alerting. 09-15-2021 03:10 PM
- Got Karma for Re: Read from inputs.conf file. 09-13-2021 05:11 PM
- Posted Re: Read from inputs.conf file on Getting Data In. 09-13-2021 04:13 PM
- Posted Re: Capabilities for edit_tcp with admin role on Splunk Enterprise. 09-13-2021 01:40 PM
- Posted Re: Error in 'lookup' command: Must specify one or more lookup fields. on Knowledge Management. 09-13-2021 10:55 AM
- Posted Re: Error in 'lookup' command: Must specify one or more lookup fields. on Knowledge Management. 09-13-2021 10:39 AM
- Posted Re: Error in 'lookup' command: Must specify one or more lookup fields. on Knowledge Management. 09-13-2021 10:25 AM
Topics I've Started
11-03-2021
12:22 PM
Glad to hear you got it resolved. Sounds like this could qualify as a connectivity issue. 🙂
09-15-2021
03:25 PM
There are also a couple of free fundamentals training classes available here, FYI.
09-15-2021
03:10 PM
I would probably suggest starting with the Splunk Security Essentials app to discover use cases you can meet in your environment with the data you have ingested in Splunk. It is well documented and should be very helpful, especially since you have limited Splunk experience. You might also find some nuggets here.
09-13-2021
04:13 PM
1 Karma
I'm not sure I understand your requirement. If your app is local to the search head (inputs usually don't live on search heads unless you have a single-server deployment), you could use the REST API to get the configured value for the host property. Otherwise, you can dynamically populate a dropdown dashboard element by running a search that looks for unique host values for the source type in question, as sketched below. But as I said, I probably don't understand what you are trying to achieve.
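A minimal sketch of such a dropdown-population search; the index and sourcetype names are placeholders you would replace with your own:

index=your_index sourcetype=your:sourcetype
| stats count by host
| sort host
| fields host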
09-13-2021
01:40 PM
You may want to read up on the HTTP Event Collector (HEC) if you are looking to push data to Splunk without using a forwarder. Data is never sent to all indexes: either you specify a target index explicitly (recommended), or the data is indexed into the configured default index. See the sketch below.
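For illustration, a minimal HEC event submission might look like this; the hostname, token, index, and sourcetype are placeholders for your environment:

# send one JSON event to the HEC endpoint (default port 8088)
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from my app", "index": "main", "sourcetype": "my:app"}'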
09-13-2021
10:55 AM
You need to ensure that the inputlookup subsearch returns a field called "Rule", not CVE. The field/column you want to match in your lookup is named "CVE Number", so you need to rename that to "Rule" for the NOT condition to work against your events:

NOT [| inputlookup ignore_cve.csv | rename "CVE Number" as Rule | fields Rule]
09-13-2021
10:39 AM
OK, so rename your field from the lookup to "Rule":

<yourSearch> NOT [| inputlookup ignore_cve.csv | rename "CVE Number" as Rule | fields Rule]
09-13-2021
10:25 AM
What is the field name of the CVE in your event results?
09-13-2021
10:12 AM
What field name from this search do you want to match against which field name in the lookup? In general:

<yourSearchReturning <matchField>> NOT [| inputlookup lookupfile.csv | fields <fieldToMatch> | rename <fieldToMatch> as <matchField>]

Of course, you only have to rename if the field name in your event search is different from the one in the lookup file.
09-13-2021
09:33 AM
Try adding quotes around that field name ("CVE Number"), or remove the space from the CSV header? Without seeing the exact layout of your CSV file and the current query, it's difficult to provide more advice. Maybe share the header row and first couple of lines of your CSV?
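For example, field names containing spaces must be quoted in SPL; the file name is from this thread, the rest is illustrative:

| inputlookup ignore_cve.csv | fields "CVE Number" | rename "CVE Number" as CVE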
09-09-2021
12:24 PM
You probably want something like:

index=xyz NOT [| inputlookup ignore_cve.csv ] | table ...

The lookup command is intended for "translating" things: key in, value out.
09-09-2021
12:07 PM
1 Karma
OK, this should work in props.conf on your UF:

[mySourcetypeNameDontUseCSV]
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=csv
FIELD_NAMES=SQLFIELD,Field1,Field2,Field3,Field4
TIME_PREFIX=@\^@\^
TIME_FORMAT=%Y-%m%d@^@^%H:%M:%S

Change the sourcetype name so that it matches what you set in inputs.conf for the monitor stanza (see the sketch below). Again, make it a descriptive name rather than a generic "csv", depending on what type of log data this is (e.g. sql:querylog or some such). It's always good to be explicit and descriptive. See the docs for timestamp recognition.
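A matching inputs.conf monitor stanza on the UF might look like this; the file path and index name are hypothetical:

# inputs.conf on the UF
[monitor:///var/log/myapp/query_log.csv]
sourcetype = mySourcetypeNameDontUseCSV
index = main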
09-09-2021
10:44 AM
Assuming it's the first date/time value in the event you want to use as _time, and we just name fields according to your column names, this should work:

[mySourcetypeNameDontUseCSV]
FIELD_NAMES = A,B,C,D,E
INDEXED_EXTRACTIONS = csv
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false

If this file is read by a UF, this props.conf entry must be placed on the UF itself, since you intend to use indexed extractions. If you want a different part of the event for the timestamp, or if you just want to use index time, you'll need a couple more settings; see the sketch below.
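With INDEXED_EXTRACTIONS you can point timestamp recognition at a specific column via TIMESTAMP_FIELDS. A sketch, assuming column A holds the timestamp and using a made-up time format:

# additional props.conf settings (illustrative)
TIMESTAMP_FIELDS = A
TIME_FORMAT = %Y-%m-%d %H:%M:%S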
09-09-2021
10:32 AM
A couple of questions:
- Where is this csv file picked up from, i.e. is it read by a UF?
- What field names do you expect to use?
- Which part of the first column do you want to be used as the event timestamp (there are multiple TS values)?
09-08-2021
10:30 AM
You don't need the table command at the end for the visualization; remove it and it should work.
09-08-2021
10:12 AM
Can you provide an anonymized sample event? Also, what IS the unexpected result you are getting?
08-27-2021
11:57 AM
Try something like this:

<yourSearch>
| eval current_hour=strftime(now(), "%H")
| eval dh=current_hour-1
| where date_hour=dh
| timechart span=1w count

Obviously, you'll need to handle the special case of midnight. Also, this is still pretty inefficient and can definitely be improved so it doesn't read all events only to filter down to the hour in question, but it should show how to make the date_hour filter dynamic. I am not a search expert by any means, so there may be an easier way to do this. I would consider a summary index that is populated hourly/nightly and base your by-hour reporting on that; see the sketch below.
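A minimal sketch of a scheduled search that populates such a summary index (index names are placeholders; run it hourly):

index=your_index earliest=-1h@h latest=@h
| timechart span=1h count
| collect index=your_summary_index

Your reports can then read from your_summary_index instead of scanning raw events.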
08-27-2021
11:34 AM
Have you taken a look at $SPLUNK_HOME/var/log/splunk/splunkd.log to get any hints as to what may be happening?
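If the instance is healthy enough to search, you can also query those logs via the internal index; the level filter here is just an example:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)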
08-27-2021
11:14 AM
You can use time modifiers in your search (docs):

index=text source=text date_hour=14 earliest=-6w
| timechart span=1h count

This will give you the last 6 weeks. If this is something you want to run regularly, you could create a saved search that runs nightly (or weekly), counts what you want to count, and stores the result in a summary index. Your alert search will run much faster that way. It may also be possible to rewrite your search using tstats if you only need to look at metadata fields (like time); date_hour is a search-time extraction, so you couldn't use that with tstats. You can also look at using INGEST_EVAL to create an indexed field that contains the hour component of your timestamp at ingest time, as sketched below. Searches that can use only indexed fields are much more efficient, and if this is not a one-off use case, the extra work at ingest time will be worth the benefits.
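A minimal INGEST_EVAL sketch; the stanza, sourcetype, and field names are hypothetical, and the transforms/props entries belong on the indexing tier:

# transforms.conf
[add_event_hour]
INGEST_EVAL = event_hour=strftime(_time, "%H")

# props.conf
[your:sourcetype]
TRANSFORMS-event_hour = add_event_hour

# fields.conf (on the search head, so the field is searched as indexed)
[event_hour]
INDEXED = true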
08-26-2021
03:51 PM
Can you do an nslookup on your splunkcloud.com hostname and confirm that the IP address is the same as the one reported in your connection timeout? I haven't run across this before; typically, podman (and docker) should pick up system-wide proxy settings without a problem. You can try adding the proxy variables to the env_file of your SC4S container: get the current HTTPS_PROXY environment variable setting from your host, copy/paste it into the env_file (see the sketch below), then restart and retry.
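The env_file entries might look like this; the proxy host and port are placeholders for your environment:

HTTPS_PROXY=http://proxy.example.com:3128
HTTP_PROXY=http://proxy.example.com:3128
NO_PROXY=localhost,127.0.0.1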
08-25-2021
11:19 AM
Hi Ema, I understand your concern. I have just tried and reproduced your error on a clean Debian 10 install. I went ahead and filed it as an issue.
08-24-2021
11:33 AM
1 Karma
The Java SDK has a class named "Settings" with methods to retrieve host and port values, among other settings. Or use REST directly, as Rich said. Many ways to skin a cat. 🙂
08-24-2021
10:52 AM
I suspect this has something to do with the warning you received earlier in the logs:

DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support

pip 21.0 will remove support for this functionality.
08-24-2021
10:44 AM
This would have been better asked as a new question instead of as a reply to an 11-year-old one... 😉 You can use a REST API call to get that info programmatically. Here's a search example:

| rest /services/server/settings | fields *Port *port

The Java SDK has a class named "Settings" with methods to retrieve host and port values, among other settings.
08-24-2021
10:30 AM
1 Karma
Agree with the HEC recommendation; start with the docs here. Note that you will use a different URL/port than the one you use to access the Splunk UI. Your admin will have to configure the HEC listener on the indexer side and provide you with a load-balanced VIP address to use on your client. The default port is 8088, but it may be different depending on what your Splunk admin configured.

Assuming an existing Splunk indexing environment has already been set up at your company, you can instead go the file-monitoring route and send data to the indexers over the Splunk-to-Splunk protocol, with a universal forwarder configured to talk to your company's indexers via outputs.conf (see the sketch below). Again, you wouldn't use the URL/port of your search head. By default, indexers listen on port 9997 for data from forwarders (unless your admin set it up differently).

BTW, file monitoring by the UF is a continuous process (no periodic polling per se) and provides you with a certain amount of resiliency against transient failure conditions that you would otherwise need to handle in your application script if going the HEC route. On top of that, in the latest product releases the UF can also monitor files and send them over HTTP (httpout), so a combination of both methods is possible.
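A minimal outputs.conf sketch for the UF; the group name and indexer host names are hypothetical:

# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997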