All Posts


But there are many fields inside _raw, so when I execute table * there is no problem with that. But when the whole data comes under _raw, those operators change to XML entity values.
I have copied my Splunk query exactly. My final results are _time, host, source, and _raw; there are no other fields, and the whole event is in _raw. The screenshot highlighted in green shows the results when the search is run up to the transaction command; the one in pink shows the results with the table command.
The table command doesn't itself modify the contents of fields. How are you displaying the contents of the fields?
Hi @sassofrasso  It is not possible to modify the robots.txt file. You can find it at "$SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/robots.txt" - however, changing it has no effect on the Splunk system, even after a _bump or restart.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello, my Splunk query is simple:

    index=abc source=xxx.trc | transaction host source maxevents=100000 | table _time host source _raw

When I execute this up to the transaction command, the results are fine: "<" and ">" appear as they are. But when I add the table command, "<" and ">" change to "&lt;" and "&gt;". Is there any way I can prevent this?
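If the escaped entities are actually present in the field content (rather than being added at display time), a minimal workaround sketch is to reverse the escaping with eval's replace function before the table command; the index, source, and event limit below are copied from the question:

    index=abc source=xxx.trc
    | transaction host source maxevents=100000
    | eval _raw=replace(replace(_raw, "&lt;", "<"), "&gt;", ">")
    | table _time host source _raw

If the entities only show up in the rendered table view, the escaping is happening in the display layer and the events themselves are unchanged, which is what the reply above is asking about.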
@sassofrasso  As far as I know, it may not be possible to customize these files.
Hello everybody, is there a way to customize the default values for robots.txt and sitemap.xml in Splunk?
@qq-stan No, I haven't installed that.
Hi  It seems there is some confusion in this thread. Please see the docs below from SentinelOne. I believe there is duplicated functionality between the apps (see the Details tab of https://splunkbase.splunk.com/app/5433).
The reason you see API inputs on the different apps is this duplication in functionality: the SentinelOne app is able to pull data and also to interact with SentinelOne via alert actions. However, it is not recommended to run it on a search head unless it is a single-instance deployment, in which case you would use the SentinelOne app on the SH configured with the API so you can utilise the alert actions, and the IA-sentinelone app for the inputs on a HF. Does that make sense?
Deployment Guide
Note: Do not install Add-ons and Apps on the same system.
Single Instance (8.x)
- (Prerequisite) Splunk CIM Add-on
- Only the SentinelOne App (sentinelone_app_for_splunk)
Single Instance + Heavy Forwarder (8.x)
Single Instance:
- (Prerequisite) Splunk CIM Add-on
- SentinelOne App (sentinelone_app_for_splunk)
Heavy Forwarder:
- IA-sentinelone_app_for_splunk
Distributed Deployment (8.x)
Heavy Forwarder:
- IA-sentinelone_app_for_splunk
Search Head:
- (Prerequisite) Splunk CIM Add-on (https://splunkbase.splunk.com/app/1621/)
- SentinelOne App (sentinelone_app_for_splunk)
Indexer:
- TA-sentinelone_app_for_splunk
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
1. Load-balancing syslog usually doesn't work very well.
2. Your description doesn't make much sense in some places.
3. This is a public community where volunteers share their experience and expertise for the common good. If you want a private audit, that's a service you normally pay for. Reach out to a friendly Splunk Partner in your area and ask them to review your environment.
4. Without knowing the details of your environment and seeing what really happened within it (checking internal logs if they haven't rolled yet, maybe verifying some other logs and external monitoring), it's impossible to say what exactly happened. What _might have_ happened is the usual: a loss of connectivity, with enough data already buffered that the extra data overflowed and never made it into the buffer. Maybe, since you say you had "some issues with indexes", some data was indexed but got lost. We don't know. And depending on how much data is still left in your environment, it might be something that can only be found on-site by examining the environment in question.
As a side note regarding your offer to share "somewhere where it's not private": are you sure you are in a position to freely disclose such information to a third party, without a prior service agreement and possibly an NDA?
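For the "checking internal logs" step, a starting-point sketch; your_syslog_server, your_index, and your_missing_host are placeholders to replace with the hosts and index in question:

    index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) host=your_syslog_server
    | stats count by component

And to see whether the missing five hours show up as a hole in indexed volume for that host:

    | tstats count where index=your_index host=your_missing_host by _time span=1h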
Hi @xiyangyang  No, the Linux UF does not use kernel hook technology like eBPF to monitor or collect data. It relies on reading log files, monitoring system logs, and other user-space data sources. The Universal Forwarder primarily operates by:
- Monitoring files and directories (often using OS-level file system event notifications like inotify on Linux, or polling)
- Listening on specified network ports (TCP/UDP) for incoming data
- Executing scripts and collecting their standard output
- Reading from Windows event logs or other system-specific logging mechanisms
Splunk does use eBPF in other products, but not in the UF. For more on eBPF see https://www.splunk.com/en_us/blog/learn/what-is-ebpf.html and for info on how it is used in Splunk Infrastructure Monitoring (part of the o11y suite) see https://www.splunk.com/en_us/products/infrastructure-monitoring-features.html
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
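You can watch this user-space tailing in the forwarder's own internal logs; a sketch (the component names below are the ones splunkd commonly logs for file monitoring, and the exact messages vary by version):

    index=_internal sourcetype=splunkd (component=TailingProcessor OR component=WatchedFile)
    | stats count by host component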
Hi Isoutamo. Sorry for the late response, I had some time off. But if I share our Splunk environment, perhaps you could recommend some places for troubleshooting.
Our clients send syslog through a Netscaler environment to the syslog servers. The syslog servers are connected to the Splunk indexer cluster, which is connected to the search head cluster and the heavy forwarders. The heavy forwarders are connected to a universal forwarder. There were no changes in our network. The logs from this specific host were missing for about 5 hours. Earlier we had some issues with the indexes in our Splunk environment, which they say are now fixed. In this case our syslog servers have Netscaler between themselves and the clients. At the end, the logs are sent via the Internet to a cybersecurity environment. Does this make the picture clearer, or can I send you our diagram of the Splunk environment (some place where it is not public)?
Brgds DD
I cannot find confirmation now, but if I recall correctly, Splunk on Linux uses inotify to get notified about new events in log files?
@m_zandinia  Great. It’s good to know that changing the report’s permissions fixed the issue. 
Thanks @kiran_panchavat  I did read that page, but probably not carefully enough the first time. After reviewing the known issue more closely, I tested a potential workaround: if the user changes the report's permissions to allow read access for everyone, they are then able to delete the report. According to the known issue, users cannot delete private reports, so I thought maybe they could delete public reports, and that turned out to be correct in my case.
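For anyone wanting to check which reports are still private before trying this workaround, a sketch using the rest command against the standard saved-searches endpoint (a sharing level of "user" means private):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | table title eai:acl.owner eai:acl.sharing eai:acl.app
    | search eai:acl.sharing=user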
Have you installed IA and TA as well?
@qq-stan Recently we integrated SentinelOne with Splunk: we installed the SentinelOne app on the SH (https://splunkbase.splunk.com/app/5433) and configured the data inputs directly on the search head. However, in a clustered environment it is recommended to configure the data inputs on a heavy forwarder and install the SentinelOne app on the search heads for dashboards and visualization.
Thank you for the response. Unfortunately, it doesn't answer my specific question.
@qq-stan  https://splunkbase.splunk.com/app/6056 - this is for SOAR on-prem/SOAR Cloud, not for Splunk Enterprise. Check out Splunkbase: https://splunkbase.splunk.com/app/5433
Note: Installing the SentinelOne TA or IA on the same node as the App may result in instability or errors.
Don't configure the inputs on all three instances; if you have a heavy forwarder, create the data inputs on it.
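Once the inputs live only on the heavy forwarder, one way to confirm which instance is actually executing them is to search the internal logs; a sketch, where the bare "sentinelone" term is an assumption about what the input's script path contains, not something taken from the app's docs:

    index=_internal sourcetype=splunkd component=ExecProcessor sentinelone
    | stats count by host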
So this app has three parts: the IA, the TA, and the main App itself. We installed the IA on a forwarder, the TA on the Cluster Master, and the App on the search head. All three have API configuration options. So where do we enter the API settings? I can hardly imagine entering them on all three.