All Posts


I finally figured out the issue. We have Splunk Observability Cloud, and I had set up Log Observer to connect to the Enterprise instance. Part of the setup is to add indexes_list_all to the authorize.conf file and make sure it is not checked for the new role you create for LO. I didn't realize that without that entry in authorize.conf the capability was enabled for all roles. Once I added it to the file it became disabled for all roles, so I added the capability to the admin role, and now all the stats are displaying on the indexes page.
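For anyone hitting the same thing, a minimal sketch of the kind of authorize.conf entries involved (the role name role_log_observer is a placeholder I made up, not from the original post):

[role_log_observer]
indexes_list_all = disabled

[role_admin]
indexes_list_all = enabled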
Can you please share some documentation or a reference?
How can I monitor whether a user is using the company wireless network? Thank you!
Hi @PickleRick, I don't see any API GET endpoint that shows Pod or Status. Any help/guidance would be appreciated.
My template consists of:

app_directory_name
    local
        app.conf
    metadata
        local.meta

app.conf:

[install]
state = enabled

[ui]
is_visible = 0
label = # I usually use the exact app directory name here

[package]
check_for_updates = 0

local.meta:

[]
access = read : [ * ], write : [ admin ]
export = system
Adding the attribute enableCertBasedUserAuth = true to web.conf generates the proxy error below:

The proxy server received an invalid response from an upstream server. The proxy server could not handle the request.
Reason: Error reading from remote server
Hi @Khalid.Rehan, I found this AppDynamics Docs page that I think would be helpful. Please have a read and let me know if it helped. https://docs.appdynamics.com/appd/22.x/latest/en/end-user-monitoring/browser-monitoring/browser-real-user-monitoring/enable-the-content-security-policy-csp
@ryangoodwin angle brackets will work, but you've got it backwards.  It should be <http://url|your-text> .
_What_ you're ingesting is entirely up to you. Even if you're pulling or receiving extra data because the source serves it, you can always filter it out during the ingestion process. Some cases require more data, some less; some people need the full original events retained for investigation/evidence purposes, some don't. So there is no one-size-fits-all solution, either with Splunk or any other vendor. Writing here that Splunk maliciously publishes apps to pump up your license usage is simply spreading FUD, so please refrain from doing so, especially by digging up an old thread just to do it.

Of course you can use various external tools to manipulate your data before ingesting it into Splunk. You can even mutilate your data to the point that it won't fit any widely used apps and solutions, so you will save some storage but will have to do manually many things for which there are normally ready-made apps. It's your choice.

The problem with _any_ log management/SIEM solution (data analytics maybe less so, but it's not unheard of there either) is that people don't know _what_ they want ingested and _why_, and end up pulling everything "just in case".
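As an example, a minimal sketch of ingest-time filtering with props.conf and transforms.conf (the sourcetype name and regex below are made-up placeholders, not from this thread):

props.conf:

[my:custom:sourcetype]
TRANSFORMS-drop_noise = drop_debug_events

transforms.conf:

[drop_debug_events]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue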
I highly recommend not using the built-in Splunk apps anymore, especially for monitoring operating system logs. Most of the Splunk apps have been updated to pull in not just the OS logs but also tons of telemetry data, which can result in orders of magnitude more data going into Splunk. The apps Splunk creates serve Splunk's bottom line, not the customer's best interest. If you want an affordable and manageable Splunk installation, I'd suggest avoiding Splunk apps and instead running the data through a different product first to shrink and enhance it before it hits Splunk. Moving away from Splunk apps will save a massive amount of storage if you don't need telemetry data in a system that is far too expensive to store telemetry data in.
I think you're overthinking it. You already have those events in rsyslog, as you're using it to receive the events in the first place. So instead of saving them to files and then bending over backwards to send them out over syslog again, just send them directly from those rsyslog instances to the final destinations. Rsyslog is very flexible with templates for sending data on; see the sketch below.
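A minimal rsyslog sketch of that idea, assuming a plain TCP syslog destination (the hostname, port, and template string are illustrative placeholders):

template(name="RawForward" type="string" string="%rawmsg%\n")

*.* action(type="omfwd" target="dest.example.com" port="514" protocol="tcp" template="RawForward")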
Thank you, @ITWhisperer! Your solution works great for me. I understand that JOIN is not a perfect solution, especially with the 50k limitation on the right side. I need to read @PickleRick's guidance a few times to absorb it; I am looking for methods other than JOINs. Thank you!
Since you have performed a tcpdump on the incoming packets, you know that the format is based on source configuration or application design.

1) Can you correlate the format to specific hosts? That would lead me to believe a configuration at the source is a possible cause (and solution).
2) Do the specific hosts have different versions of Fortinet installed? Perhaps the vendor modified the syslog message format between releases (unlikely, but not impossible).
3) Are the message formats correlated to specific message types or services within the Fortinet services? This would be less likely fixable via configuration.

The RFC for syslog was more of a suggestion than a hard rule, which is why vendors and applications often don't have a standard implementation of the recommended fields. Through rsyslog destination configurations you have the opportunity to manipulate the output to add fields or clone fields into a specific order, but most security rules say you should not modify logs in transit. That might not be an issue for your implementation, so you can look into it.

The duplicate ingestion is interesting and much harder to pinpoint. Depending on the frequency, can you get a tcpdump to indicate whether the message was generated at the source twice, or whether the UF monitoring the file had a hiccup?
As @PickleRick says, join is rarely the best option and something based on the approach outlined may be your way forward. Having said that, if you still want to go forward with a join, to answer your question about how to get earliest - 60m, you could do this:

index=myindex
| fields a, b, c
| join type=inner left=l right=r where l.keyid=r.keyid
    [search index=myindex
        [ | makeresults
          | addinfo
          | eval latest=info_max_time
          | eval earliest=relative_time(info_min_time, "-60m")
          | table earliest latest]
    | fields d, f ]
| table l.a, l.b, l.c, r.d, r.f

The addinfo command tells you what time parameters were used for the initial search.
Unfortunately there is no such information field in the REST API for users, and the timestamp of the passwd file is updated when any user is changed. The best option I could find is to search the _audit index for the "change_own_password" or "password change" actions. Hopefully the last password change of the admin account occurred during your data retention period for this index.

index=_audit user=admin action=*password*
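A sketch of pulling just the most recent change time out of that (the action values present in _audit can vary by Splunk version, so treat the filter as an assumption to verify):

index=_audit user=admin action=*password*
| stats max(_time) as last_changed by user
| eval last_changed=strftime(last_changed, "%Y-%m-%d %H:%M:%S")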
Did you ever find an answer or a solution other than adding tty?
Technically speaking, there is no requirement for a minimum set of files. You could, if I remember correctly, even push an empty app (which can make sense in some scenarios). But there are some good practices to follow.

The contents of an app look like this: https://dev.splunk.com/enterprise/docs/developapps/createapps/appanatomy/

There are some rules of thumb when building an app (and some of them are enforced if you want to pass appinspect when vetting an app for Cloud or uploading to Splunkbase):

1. You should put your files in the default directory. The local directory is for users to overwrite the defaults (this one is not that important if you're writing just a one-off internal app, but it's worth remembering).
2. There should be proper permission entries in metadata/default.meta (again, it's not that important whether it's default.meta or local.meta if it's your internal app, but it's good to stick to the good practices; and it's not that important if the app only acts at index time).
3. You should have an app.conf describing the app (at least have the launcher section with version/description/author); a minimal skeleton is sketched below.
4. If you define inputs, define them with typical parameters as default instances and set them to disabled in default/inputs.conf, letting the user enable them selectively if needed. But if you're creating an app just to enable some inputs locally, by all means use the local directory and create enabled inputs there.

And adopt a reasonable naming convention for your apps. There is nothing more annoying than having to dig through a bunch of apps named app1, appA, myapp1, app2023, cisco_TA, TA_cisco, TA_cisco_not_working... and so on.
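For illustration, a minimal skeleton along those lines (the app name my_custom_app and all values are placeholders, not a mandated layout):

my_custom_app/
    default/
        app.conf
    metadata/
        default.meta

default/app.conf:

[launcher]
author = Your Name
description = What the app does
version = 1.0.0

[install]
state = enabled

[ui]
is_visible = 0
label = my_custom_app

[package]
id = my_custom_app
check_for_updates = 0

metadata/default.meta:

[]
access = read : [ * ], write : [ admin ]
export = system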
To be clear, the way I understand the legal terms of use, only what you input into the AI search is used for training purposes (of the AI), and it is stored within your cloud stack. It does not share the data contained in the results (pulled from your indexers).
Splunk has no way of knowing how many results your search should return. You can, however, "cheat" a bit if you know how many results there should be (or can easily get that number with a subsearch, for example):

<your_search>
| eventstats count
| where count>10
| fields - count

(Use whatever threshold you need in place of 10.) Now your search will return _any_ results only if it produced at least that minimal number of events. You can use this to trigger an error with require, as sketched below.
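A sketch of the require variant (assuming Splunk 9.0 or later, where the require command makes the search fail with an error if it receives no results):

<your_search>
| eventstats count
| where count>10
| fields - count
| require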
Generally, the answer to "is it possible" is usually "yes", with the possible extra condition that you might need to build the solution yourself. So it's possible in the sense that no one forbids you from doing it, but not necessarily in the sense that there is a built-in way to do it with standard tools. That is the case with your question: by default there is no mechanism in Splunk that would encrypt the resulting csv/pdf/whatever. You'd need to take the existing sendemail.py and modify it so that it can do encryption, which would require some Python skills and some time for development.
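Purely as an illustration of the encryption step itself (this is not sendemail.py's actual code; it's a generic sketch using the third-party cryptography package, and the file names are made up):

# Hypothetical sketch, not taken from sendemail.py.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the key would have to be shared with the recipient out of band
cipher = Fernet(key)

# Encrypt the exported results before attaching them to the email.
with open("results.csv", "rb") as src:
    encrypted = cipher.encrypt(src.read())

with open("results.csv.enc", "wb") as dst:
    dst.write(encrypted)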