All Posts

Dear Karma, we tried the suggested option. Can you please guide us on where to update the file? We suspect the issue is the location where we are writing the regex. Currently, we have updated the Windows folder on the deployment server and the /etc/system/local/ directory at the HF level. Thanks, Suraj
There are some fields which are always present - source, sourcetype, host, _raw, _time (along with some internal Splunk fields). But they each have their own meaning and you should be aware of the consequences if you want to fiddle with them. In your case you could most probably add a field matching the appropriate CIM field (for example dvc_zone). It could be a search-time field driven by a static lookup listing your devices and associating them with zones, or (and that's one of the cases where indexed fields are useful) an indexed field, possibly added at the initial forwarder level.
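As an illustration of the lookup-based approach, a minimal sketch (the lookup file, column names and sourcetype below are placeholder assumptions, not anything from your environment):

transforms.conf:

[device_zones]
# hypothetical CSV with two columns: host, zone
filename = device_zones.csv

props.conf:

[your:sourcetype]
# automatic lookup: match on host, write the zone column out as dvc_zone
LOOKUP-dvc_zone = device_zones host OUTPUT zone AS dvc_zone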
The search might get cancelled if - for example - your user exceeds resource limits. If you are trying to reproduce the issue with a user whose role has differently set limits (or no limits at all), you might not hit the same restrictions.
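For reference, the per-role search limits live in authorize.conf; a minimal sketch (the role name and numbers are made up purely for illustration):

[role_restricted_analyst]
# hypothetical role used for illustration
srchJobsQuota = 3
srchDiskQuota = 100
srchMaxTime = 3600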
Yes, for certain sources it's a bit hard for me to override the source name; I will try to see what can be done. I was looking at source as it's one of the few fields that seems to be common across multiple models, e.g. Network, Authentication, Change, etc.
Well, look into the CIM definition and check which fields might be relevant to your use case. "Zone" is a relatively vague term and can have different meanings depending on context. For example, the Network Traffic model has three different "zone" fields: src_zone, dest_zone and dvc_zone. Of course filtering by the source field is OK, but it might not contain the thing you need.
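For example, if the Network Traffic datamodel fits your data, a search along these lines could work (the zone value "dmz" is just a placeholder):

| tstats count from datamodel=Network_Traffic
    where All_Traffic.dvc_zone="dmz"
    by All_Traffic.src_ip All_Traffic.dest_ip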
We have recently migrated to SmartStore. Post-migration, SF and RF are not met. Can anyone help me with the troubleshooting steps?
The join command (which, as a rule of thumb, should not really be used unless there is a very good reason for it and there is no other way) uses two searches. So the .csv file you're talking about must be referenced somehow within such a search - you can't just search from a file.
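A typical shape of such a search, just as a sketch (index, key and file names below are placeholders):

index=my_index sourcetype=my_sourcetype
| join type=left some_key
    [| inputlookup my_file.csv
    | fields some_key other_field ]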
It might be true. There are two important things here.

1. You can only distribute _apps_ from the deployment server. So the apps get pulled by the deployment client (in this case your UF) and put into your $SPLUNK_HOME/etc/apps directory.

2. Splunk builds the "effective config" by layering all relevant config files according to the precedence rules described here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles

So depending on where the install wizard puts those settings, you might or might not be able to overwrite them with an app deployed from the DS. You can check where the settings are stored by running splunk btool inputs list --debug. This will show you the effective config entries along with the file in which each one is defined. If a setting is in some etc/apps/something/... file, it can be overwritten, possibly with some clever app naming in order to put the config file alphabetically before/after another app. But if it's in system/local... you can't overwrite it with an app. And that's why it's not advisable to put settings in system/local unless you're really, really sure you want to do that. If you put settings there, you can't overwrite them in any way later unless you manually edit the system/local/whatever.conf file on that particular Splunk component (OK, there is an exception for clustered indexers, but that's for another time).
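As a quick illustration of the btool check (the stanza prefix below is just an example, not necessarily one you have):

# show every effective inputs.conf setting and the file it comes from
splunk btool inputs list --debug

# narrow it down to stanzas starting with a given prefix, e.g. Windows event log inputs
splunk btool inputs list WinEventLog --debug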
Hey @PickleRick  2. You are absolutely right. I just tried with different users on the same accelerated model, same query but different roles, and the restricted user gets far fewer results. So, can I say the way forward seems to be one common data model then? Is there any recommended or easy way to perform filtering between zones in a summary search, for example? Is using where source=ZoneA* alright then?
After encoding it does run, but there are no results. I do get results for queries without eval and URL-encoding.
OK. There are additional things to consider here.

1. A datamodel is not the same as a datamodel's accelerated summary. If you just search from a non-accelerated datamodel, the search is underneath translated by Splunk to a normal search according to the definition of the dataset you're searching from. So all role-based restrictions apply.

2. As far as I remember (but you'd have to double-check it), even if you search from accelerated summaries, the index-based restrictions should still be in force, because the accelerated summaries are stored along with normal event buckets in the index directory and are tied to the indexes themselves.

3. And because of that, exactly the same goes for retention periods. You can't have an accelerated summary retention period longer than the events' retention period, since the accelerated summaries would get rolled to frozen with the bucket the events come from.

So there's more to it than meets the eye.
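To make point 1 concrete, the difference is roughly this (reusing the CIM Network Traffic model from earlier as an example):

``` searches only the accelerated summaries ```
| tstats summariesonly=true count from datamodel=Network_Traffic by All_Traffic.dvc_zone

``` default behaviour: uses summaries where available and falls back to the (role-restricted) translated search for the rest ```
| tstats summariesonly=false count from datamodel=Network_Traffic by All_Traffic.dvc_zone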
If this doesn't work, you could try using CSS with the token value.
If your time periods are always 1 hour, you only need the start time and you can bin / bucket _time with span=1h; this gives you a time you can match on as well as your values.

<your index>
| bin _time as period_start span=1h
| dedup period_start Value
| eval flag = 1
| append
    [| inputlookup lookup.csv
    | eval period_start = ``` convert your time period here ```
    | eval flag = 2]
| stats sum(flag) as flag by period_start Value
``` flag = 1 if only in index, 2 if only in lookup, or 3 if in both ```
| where flag = 2
Hi Splunkers, I have a question about a possible issue with UF management via Deployment Server. In a customer environment, some UFs have been installed on Windows servers. They send data to a dedicated HF. Now we want to manage them with a Deployment Server. The point is this: those UFs were installed with the graphical wizard. During that installation, it was set which data to collect and send to the HF, so inputs.conf was configured during this phase, in a GUI manner. Now, in some Splunk course material (I don't remember which one, it should be the Splunk Enterprise Admin one), I got this warning: if inputs.conf for a Windows UF is set with the graphical wizard, like in our case, the Deployment Server could have problems interacting with those UFs, and might even not be able to manage them. Is this confirmed? Do you know in which section of the documentation I can find evidence of this?
You could record which events have triggered an alert, and when it was triggered, in a summary index or KV store/CSV, and remove these from the subsequent set of results if they are within 24 hours.
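As a rough sketch of the summary-index variant (the index name, source and the event_id field are placeholders you would adapt to your data):

``` your alert search ```
index=my_index sourcetype=my_sourcetype
``` drop anything already alerted on in the last 24 hours ```
| search NOT
    [ search index=summary source="alert_history" earliest=-24h
    | fields event_id ]
``` record what fired so the next run can exclude it ```
| collect index=summary source="alert_history"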
Thanks for the hints. In terms of data retention, all the sections will have a similar policy. However, access grants can be an issue. In my use case, the dashboards will be monitored by section personnel and also by the SOC. Therefore, in terms of access, the SOC will be able to see DMZ, ZoneA and ZoneB, while the respective members of each section should only be able to see their own zones (need-to-know basis policy).

At the moment I am using different indexes so I can perform some transforms specific to each zone, as the syslog sending formats differ due to the different log aggregators used by each zone. By using the different indexes on the heavy forwarder, I am able to perform some SED for particular log sources, plus host & source overrides on the HF. I remember that I can limit access based on indexes, but I guess this is not possible with data models - will this be a concern? If I put them all in one data model, is it still possible to restrict access? For example, if the user can only manipulate views from the dashboard and cannot run searches themselves, that would still be OK.

Pros and cons in my mind:

Separate data models:
- Pros: I can easily segregate the tstats query.
- Cons: Might be difficult to get overview stats; need to use appends and maintain each additional new zone. Each new data model will need to run periodically and increase the number of scheduled accelerations?

Integrated data model:
- Cons: Might be harder to filter, e.g. between ZoneA, ZoneB and DMZ. It seems like I can filter only on the few parameters in the model, e.g. source, host.
- Pros: Easier to maintain, as I just need to add new indexes into the data model whitelist. Limits the number of scheduled runs.
- And, as mentioned, the point on data access - will it still be possible to restrict?

I am still quite new to Splunk so some of my thoughts might be wrong. Open to any advice, still in a conundrum.
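On the "harder to filter" concern for a single shared data model: because the events (and their accelerated summaries) still live in per-zone indexes, the index itself can usually be used as the zone filter in the tstats where clause. A hedged sketch with made-up index names:

| tstats summariesonly=true count from datamodel=Network_Traffic
    where index=zonea_fw
    by All_Traffic.src_ip All_Traffic.dest_ip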
Hello, the time is created by the search snippet below:

| bin _time span=60
| eval Time1=strftime(_time+3600,"%A %H")
| eval eventstart = strftime(_time, "%H")
| eval eventend=01
| eval eventrange = eventstart+(eventend)
| eval eventrange=if(eventrange=-1, 23, eventrange)
| replace 0 with 00, 1 with 01, 2 with 02, 4 with 04, 5 with 05, 6 with 06, 7 with 07, 3 with 03, 9 with 09, 8 with 08
| eval Time2 = Time1.": [".eventstart."00 - ".eventrange."00] "

So the TIME format is actually Day TIME_TO hour: [TIME_FROM - TIME_TO].
The .csv is actually used with "join". However, my question is related more to just finding a file, whether lookup or input. I don't know if the .csv is an output of some other search, a script, or a file loaded into Splunk. Is there a way to find where it comes from if I know nothing but its name?
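One way to track it down from search, sketched here with a placeholder file name (you need sufficient permissions to see the relevant apps via REST):

``` lookup definitions that reference the file ```
| rest /services/data/transforms/lookups splunk_server=local
| search filename="my_file.csv"
| table title eai:acl.app filename

``` saved searches / reports whose SPL mentions the file ```
| rest /services/saved/searches splunk_server=local
| search search="*my_file.csv*"
| table title eai:acl.app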
Hi @mythili, there's a timeout for search execution. In addition, you are probably an admin and the user has different search settings. So you could hint to your colleague to run fewer concurrent searches. I wouldn't suggest giving a higher number of concurrent searches to that role because you could have performance issues. You could also hint to your colleague to run this search in the background; that way he/she can be sure that the search doesn't time out. Ciao. Giuseppe
Hello, first of all, sorry for my bad English, I hope you can understand everything.

My goal is to get the journald logs from the Universal Forwarder into Splunk in JSON format (Splunk/UF version 9.1.2). I use the journald_input app.

inputs.conf (UF):

[journald://sshd]
index = test
sourcetype = test
journalctl-filter = _SYSTEMD_UNIT=sshd.service

I've tried different props.conf settings. For example, something like this:

props.conf (UF):

[test]
INDEXED_EXTRACTIONS = json
KV_MODE = json
SHOULD_LINEMERGE = false
#INDEXED_EXTRACTIONS = json
#NO_BINARY_CHECK = true
#AUTO_KV_JSON = true

On the UF I check with the command

ps aux | grep journalctl

whether the query is enabled. It displays this command:

journalctl -f -o json --after-cursor s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=c7efb124c33f43b0b0142ca0901ca8de;m=11aa0e450a21;t=233ae3422cd31;x=00af2c733a2cdfe7 _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE

I can try it out by running this command in the CLI, but I have to take out the "--after-cursor ...." part. So I run the following command on the CLI to keep track of the journald logs:

journalctl -f -o json _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE

On the Universal Forwarder, the tracked journald logs then look like this (a nice JSON format):

{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=a1aaa111a11aaa111aa000a0101;m=11aa00c5b9a0;t=233ae39a37aa2;x=00af2c733a2cdfe7", "__REALTIME_TIMESTAMP" : "1710831664593570", "__MONOTONIC_TIMESTAMP" : "27194940570016", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "Invalid user asdf from 111.11.111.111 port 111", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2d;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec25bf;t=233ae39c9e6c0;x=10ac2c735c2cdfe7", "__REALTIME_TIMESTAMP" : "1710831667111616", "__MONOTONIC_TIMESTAMP" : "27194943088063", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2e;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec278a;t=233ae39c9e88c;x=5fb4c21ae6130519", "__REALTIME_TIMESTAMP" : "1710831667112076", "__MONOTONIC_TIMESTAMP" : "27194943088522", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2f;b=a1aaa111a11aaa111aa000a0101;m=11aa0108f5bf;t=233ae39e6b6c0;x=d072e90acf887129", "__REALTIME_TIMESTAMP" : "1710831668999872", "__MONOTONIC_TIMESTAMP" : "27194944976319", "_BOOT_ID"
: "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a30;b=a1aaa111a11aaa111aa000a0101;m=11aa010e0295;t=233ae39ebc397;x=d1eb29e00003daa7", "__REALTIME_TIMESTAMP" : "1710831669330839", "__MONOTONIC_TIMESTAMP" : "27194945307285", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a31;b=a1aaa111a11aaa111aa000a0101;m=11aa012f0b3c;t=233ae3a0ccc3e;x=c33e28a6111c89ea", "__REALTIME_TIMESTAMP" : "1710831671495742", "__MONOTONIC_TIMESTAMP" : "27194947472188", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a32;b=a1aaa111a11aaa111aa000a0101;m=11aa0135591b;t=233ae3a131a1d;x=45420f6d2ca07377", "__REALTIME_TIMESTAMP" : "1710831671908893", "__MONOTONIC_TIMESTAMP" : "27194947885339", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "PRIORITY" : "3", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a33;b=a1aaa111a11aaa111aa000a0101;m=11aa01355bee;t=233ae3a131cf0;x=15b1aa1201a45cdf", "__REALTIME_TIMESTAMP" : "1710831671909616", "__MONOTONIC_TIMESTAMP" : "27194947886062", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a34;b=a1aaa111a11aaa111aa000a0101;m=11aa01355c42;t=233ae3a131d45;x=123f45a09e00a8a2", "__REALTIME_TIMESTAMP" : "1710831671909701", "__MONOTONIC_TIMESTAMP" : "27194947886146", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" }      (Example)    But when I look for the logs on the search head, they look like this:     Invalid user asdf from 111.11.111.111 port 
111pam_unix(sshd:auth): check pass; user unknownpam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2pam_unix(sshd:auth): check pass; user unknownFailed password for invalid user asdf from 111.11.111.111 port 111 ssh2error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111

Does anyone know why the logs are written together instead of being treated as individual events? And why are the logs not in JSON format? Can anyone suggest a solution for how to fix this? Thank you very much!
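For what it's worth, here is a line-breaking sketch that is sometimes used for streams of single-line JSON records. It assumes each journald record arrives starting with {"__CURSOR" and that the props are applied where the data is actually parsed (HF/indexer, or on the UF only when using INDEXED_EXTRACTIONS), so treat it as a starting point rather than a confirmed fix:

[test]
SHOULD_LINEMERGE = false
# break before each new JSON record; the lookahead keeps the opening brace with the record
LINE_BREAKER = ([\r\n]+)(?=\{"__CURSOR")
KV_MODE = json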