Hello, after I upgraded Splunk to version 9.1.1, some panels on the Overview page of the Distributed Monitoring Console no longer load and are empty. The other tabs in the Distributed Monitoring Console don't have any problems. Does anyone have an idea what the problem is? Thanks
Hello, I am attempting to run a Splunk search via a macro. When I use the Splunk search UI, the relevant information comes up; however, despite using the same user, the API search doesn't return the same results. After further investigation: when I look for the macro in the web UI I can see it, but not via the API. What could be causing this?
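One way to narrow this down is to list the macros the REST API actually exposes, since a macro created as private in one app only appears under that owner/app namespace. A hedged sketch using the generic `configs/conf-{file}` endpoint; the host, credentials, and namespace are placeholders to adjust:

```shell
# List every macro visible across all namespaces (-/- = any owner, any app);
# -k skips TLS verification, fine for a local sanity check only
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/-/-/configs/conf-macros?count=0"
```

If the macro appears here but not when you query with the specific user's namespace, its sharing/permissions are the likely culprit.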
Hi @bwhite, some checks:
- do you have other logs (e.g. internal or application logs) from the missing servers?
- did you check that the TA_nix was correctly deployed to those servers?
- did you check that, on those servers, the user running Splunk has permission to read the files and execute the scripts?
Ciao. Giuseppe
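The first two checks can be run from the search head, assuming the missing servers forward their internal logs. A hedged SPL sketch; the host name is a placeholder:

```
index=_internal host="missing-server-01" sourcetype=splunkd component=ExecProcessor
| stats count by host, message
```

`ExecProcessor` is the splunkd component that runs scripted inputs, so errors from the TA_nix scripts (permission denied, script not found) would typically surface here.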
Hi @splunkguy, to move from one standalone Search Head to another standalone SH, you have to follow the same steps, obviously without the cluster: make a copy of the apps to migrate from the old SH, then copy those apps to $SPLUNK_HOME/etc/apps on the new SH. Ciao. Giuseppe
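The copy step can be sketched as below. This is a minimal, self-contained illustration that uses temporary directories in place of the real $SPLUNK_HOME/etc/apps paths, and "my_app" is a placeholder app name; tar is used instead of plain cp because it preserves permissions:

```shell
# Stand-ins for the app directories on the old and new Search Heads
# (on real instances these would be $SPLUNK_HOME/etc/apps on each SH)
OLD_SH_APPS=$(mktemp -d)
NEW_SH_APPS=$(mktemp -d)

# Simulate an app to migrate on the old SH
mkdir -p "$OLD_SH_APPS/my_app/local"
printf '[my_macro]\ndefinition = index=main\n' > "$OLD_SH_APPS/my_app/local/macros.conf"

# Copy the app over, preserving permissions; between two hosts this would
# typically be the same tar piped through ssh, or an scp/rsync of the folder
tar -C "$OLD_SH_APPS" -cf - my_app | tar -C "$NEW_SH_APPS" -xf -

ls "$NEW_SH_APPS/my_app/local"
```

After copying the apps into place, restart the new SH so it picks them up.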
Hi @ucorral, let me understand: Splunk reads the correct timestamp from the log. Did you configure the timezone in props.conf (https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Applytimezoneoffsetstotimestamps)? Then, did you configure the timezone for your user in the GUI [<your_user_name> > Preferences]? Ciao. Giuseppe
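If the events carry timestamps without an explicit timezone, the offset can be declared per sourcetype in props.conf on the parsing tier (indexer or heavy forwarder). A hedged fragment; the sourcetype name and zone below are placeholders:

```
# props.conf (on the parsing tier, not the universal forwarder)
[my_sourcetype]
# Interpret timestamps that carry no TZ information as this zone
TZ = America/Mexico_City
```

Note that TZ only affects events indexed after the change; already-indexed events keep their original offset.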
Hello @PickleRick, thanks for the heads-up, I'll delete them from props.conf. However, the information is still arriving 6 hours late. What would be the best recommendation? Thanks,
I edited the conf files on my local server before deploying, so I know they are all identical. I have 5 servers, and I copied the Splunk_TA_nix folder to apps on each. 3 of the 5 have data showing up in the new "os" index. splunkd.log (in fact the whole splunk/log folder) didn't have any errors, but it also didn't have any mention of "idx=os" on the missing servers. I ran some of the scripts in Splunk_TA_nix/bin in debug mode: no errors. What log file or index do I check to debug the issue?
You mean index=activitylog_activityreceiver Environment="AWS-DEV6"
MessageTemplate="Received Post Method for activity: {Activity}"
| spath input=Properties.Activity
| table ActivityType, ClientId, Source, Properties.Activity
Ok, but if you don't have this information in the logs, how is Splunk supposed to help here? It's the source's responsibility to produce logs. If you have a means of 1) identifying unambiguously which instance of a program hit the firewall rule and 2) logging the spawning of processes, then maybe you could correlate the two. But if you don't have this info, how would you like to get it? Guess?
For reasons I don't understand, the problem is fixed when I increase the JVM option -XX:MaxJavaStackTraceDepth above 10. Any value below 10 causes the NoClassDefFoundError. If anyone has an explanation, it would be much appreciated (does it have anything to do with the maximum-activity-trace-stack default value of 10?).
It looks as if you were getting your events from the file, but the date was being parsed in the wrong timezone. Most of the settings you showed belong on the receiving Splunk instance, not the forwarder. And don't set SHOULD_LINEMERGE to true; it should almost never be set to true. Also, indexed extractions should not be overused, and this case doesn't seem to be one justifying their use (here I disagree with @gcusello).
Generally, system-wide stuff is in etc/system and etc/apps, whereas users' content is in etc/users. You need to remember, however, that your stuff probably depends on add-ons which do the extractions. If you use data models, they should be properly defined and configured. And so on.
There is also an app providing a more flexible email command than the built-in sendemail one. I don't remember the name, but it's easily findable on Splunkbase. Having said that, remember that it's risky to use "external" data this way, because you might end up sending emails (and sometimes a lot of them) to non-existent or empty email addresses.
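One mitigation, whichever email command you end up using: filter the recipient field before the sending step, so blank or malformed addresses never reach it. A hedged SPL sketch; `email` is an assumed field name:

```
... | where isnotnull(email) AND match(email, "^[^@\s]+@[^@\s]+\.[^@\s]+$")
```

The regex is deliberately loose (one @, no whitespace, a dot in the domain); it won't catch every invalid address, but it drops the empty and obviously broken ones.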
Hi @PickleRick, thanks for replying. My issue is that the Splunk SH is running on Linux 6 and I have to migrate it to Linux 8, because Splunk 9.1 is not supported on legacy Linux. So I have built a new instance and added it to the cluster; as a standalone SH it can search the data and I can make it primary, but I'm not sure how to copy the dashboards/alerts built by users who are no longer active. So that's where I am looking for options to copy them from the old instance to the new one before making it active.
Well... the idea is relatively simple (you want to capture the old SH's state and set the new SH to the same state), but the details can be tricky. Generally, you want to move the config files (both the system-wide and the users' ones) and the KV store state. The problem is that if you're migrating, you might not need some of that on the new instance (certs, for example, might need to be regenerated for the new name), so you'll have to be selective about restoring the configs.
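For the KV store part, recent Splunk versions ship a CLI backup/restore mechanism. A hedged sketch; the archive name is a placeholder, and I'd verify the exact syntax against the docs for your version:

```shell
# On the old SH: create a KV store archive
# (it lands under $SPLUNK_HOME/var/lib/splunk/kvstorebackup)
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName sh_migration

# Copy the archive to the same directory on the new SH, then restore it
$SPLUNK_HOME/bin/splunk restore kvstore -archiveName sh_migration.tar.gz
```

The config files themselves (etc/system/local, etc/apps, etc/users) can be carried over as a plain file copy while Splunk is stopped, minus whatever you decided not to restore.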
Hello @10061987, you can use fields from the first row of the results in the alert, e.g. $result.email$, assuming your search includes the email field. Then, if you trigger the alert for each result (rather than just once), each result will execute the action with its corresponding row from the events. Reference - https://community.splunk.com/t5/Alerting/Splunk-Alerts-How-to-use-email-address-from-variable/m-p/633020 Please accept the solution and hit Karma if this helps!
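In savedsearches.conf terms, the per-result behavior corresponds roughly to the fragment below. A hedged sketch; the stanza name is a placeholder and the other alert settings (schedule, trigger condition) are omitted:

```
# savedsearches.conf
[notify_each_user]
# 0 = trigger the action once per result row, so $result.email$
# is filled from each row in turn; 1 = one digest action for all results
alert.digest_mode = 0
action.email = 1
action.email.to = $result.email$
```

The same toggle appears in the UI as "Trigger: For each result" on the alert's trigger settings.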