All Posts

It could be several things post restart. Get some more info from the _internal logs - this may help further investigate and identify the issue:

index=_internal sourcetype=splunkd splunk_app_db_connect
index=_internal sourcetype=splunkd "db_connect" log_level=ERROR

Check the KV store status:

| rest splunk_server=local count=1 /services/server/info | table kvStoreStatus

or

$SPLUNK_HOME/bin/splunk show kvstore-status --verbose

Check the DB Connect app permissions:

chown -R splunk:splunk $SPLUNK_HOME/etc/apps/splunk_app_db_connect

Sometimes it won't start due to the default certificates, as they may have expired. If you are using the Splunk default certificates, move or rename the $SPLUNK_HOME/etc/auth/server.pem file (e.g. to server.pem.old) and restart Splunk to regenerate the certificate.

Check the Java version on the server:

java -version

Make sure it's compatible: https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/Prerequisites

See if there is additional help on the troubleshooting page: https://docs.splunk.com/Documentation/DBX/3.16.0/DeployDBX/TroubleshootingTool

If all that fails to resolve the issue, log a support case.
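If you'd rather script the KV store check than run it from the search bar, here is a minimal Python sketch (an illustration only - it assumes the default management port 8089, placeholder admin credentials, and the requests library):

import requests

# Query /services/server/info, the same endpoint the | rest search above uses.
resp = requests.get(
    "https://localhost:8089/services/server/info",
    params={"output_mode": "json"},
    auth=("admin", "changeme"),  # placeholder credentials
    verify=False,  # Splunk ships self-signed certs by default; verify properly in production
)
resp.raise_for_status()
content = resp.json()["entry"][0]["content"]
print("kvStoreStatus:", content.get("kvStoreStatus"))

A status other than "ready" points back at the KV store as the likely culprit.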
I cannot find any option for a recurring maintenance window in ITSI. E.g. stop alerting daily from 23:00 to 00:00 (1 hour). Does ITSI have something like cron-based suppression? Do not tell me to use the REST API again.
@airforce Hi. DB Connect is what you need for integration with Snowflake logging, so go with that:  https://docs.splunk.com/Documentation/DBX  https://splunkbase.splunk.com/app/2686  The Snowflake app is for Splunk SOAR (Security Orchestration, Automation and Response), which is for security process automation; from your question it appears you don't need that.
Can you give me an idea of how to do it?
Just collecting the logs is a great start. If you want to collect technical metrics about user interaction, you can use the RUM integration as well. And depending on what your backend looks like, you could use the open-source OpenTelemetry libraries to instrument the backend application that processes your web application data, as sketched below. There is even a free and open-source Splunk distribution of OpenTelemetry (including the collector) available.
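To make that concrete, here is a minimal sketch of instrumenting a Python backend with the open-source OpenTelemetry SDK (the service name, endpoint, and handler function are placeholders, and it assumes a collector listening on the default OTLP gRPC port 4317):

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Register a tracer provider that exports spans to a local collector.
provider = TracerProvider(resource=Resource.create({"service.name": "web-backend"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def handle_submission(payload: dict) -> None:
    # Wrap the unit of work in a span so timing and attributes reach the collector.
    with tracer.start_as_current_span("handle_submission") as span:
        span.set_attribute("payload.fields", len(payload))
        ...  # existing processing logic

The Splunk distribution of OpenTelemetry wraps this same upstream API, so instrumentation written this way should carry over.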
That is not relevant to my case since I have no orphaned knowledge objects. I just checked, following the instructions from the knowledge base article, and even with all filters set to 'All' the list turns up empty.
OK. What _exactly_ did you try? (Just saying "tried configuring" doesn't tell us anything - it's like saying "I tried to go to my mum's" without even specifying whether you wanted to take a bus, ride a bike or drive a car.) And what was the result? Did you get any errors or other messages? How did you verify that something "is not sent"?
OK. Let's start from the beginning.

1. Monitoring files this way requires your forwarder to run with root permissions in order to be able to read all those files. That might be problematic with your security team and is generally not the best idea (although sometimes it can't be avoided).

2. Monitoring the .bash_history files is not a very good way to monitor user activity. Bash history can easily be manipulated, turned off completely, or bypassed. There are other ways to monitor user activity (some of them more convenient than others, I admit). If you want to limit yourself to just bash and have a log of bash history entries, you can set the syslog_history option for bash and have it log to the local syslog daemon - still not a great, fail-safe solution, but way better than reading each user's separate file.

3. If you want to stick with reading the .bash_history files, you should make sure your events are timestamped: if the environment variable HISTTIMEFORMAT is set (e.g. export HISTTIMEFORMAT="%F %T "), bash writes a timestamp for each entry into the history file. Make this variable persistent across your whole environment (set it in /etc/profile.d/). Without it the behaviour will be exactly as you describe - the events carry no timestamps, so Splunk has no way of telling when they are from.

4. I hope you don't have too many users on that box, because you might run out of file descriptors if you open too many files.

5. Oh, and BTW, 7.x has been obsolete for some years now, so it would be time to consider an upgrade.
Hi @shakti, use faster disks if you're on a physical server, or dedicated resources if you're on a virtual server, and if possible SSD disks. Ciao. Giuseppe
@gcusello Thank you for your reply. The IOPS of the indexers and search heads is between 50 and 300... I guess that's pretty low. May I know if you have any suggestions on how to improve it?
Hi @Siddharthnegi , please try this https://splunkbase.splunk.com/app/1603 Ciao. Giuseppe
Hi there, do any of these Splunk Knowledge Articles help?  https://splunk.my.site.com/customer/s/article/Using-the-Splunk-HTTP-Event-Collector-HEC-Video https://splunk.my.site.com/customer/s/article/HEC-Endpoint-and-Test-HEC
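If the articles don't get you there, a quick smoke test sometimes helps isolate the problem. A minimal Python sketch (assuming HEC on the default port 8088 and a placeholder token - adjust both for your environment):

import requests

resp = requests.post(
    "https://localhost:8088/services/collector/event",  # default HEC event endpoint
    headers={"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"},  # placeholder token
    json={"event": "hello from HEC", "sourcetype": "manual"},
    verify=False,  # self-signed certs in a test environment
)
print(resp.status_code, resp.text)  # a healthy HEC returns {"text":"Success","code":0}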
The link you have provided gives a 404 Not Found error, @gcusello.
Hi @Siddharthnegi , see in the Splunk Dashboard Examples app (https://splunkbase.splunk.com/app/1603) the example "Null Search Swapper", which describes how to replace a panel with a message when there are no results. Ciao. Giuseppe
Hi there, a quick search of the Splunk Knowledge base finds this article: https://splunk.my.site.com/customer/s/article/AQR-errors-in-internals-logs

Workaround -

I. To find the orphaned knowledge objects:
1. Select Settings > All configurations.
2. Click Reassign Knowledge Objects.
3. Click Orphaned to filter non-orphaned objects out of the list.
4. After filtering, reassign the orphaned knowledge objects to active users.

II. To reassign knowledge objects to another owner:
1. For the knowledge object that you want to reassign, click Reassign in the Action column.
2. Click Select an owner and select the name of the person that you want to reassign the knowledge object to.
3. Click Save to save your changes.
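If you'd rather enumerate owners programmatically before clicking through the UI, here is a rough Python sketch against the REST directory endpoint, which lists knowledge objects across users and apps (the management port, credentials, and choice of endpoint are assumptions - adapt to your environment):

import requests

resp = requests.get(
    "https://localhost:8089/servicesNS/-/-/directory",  # knowledge object listing
    params={"output_mode": "json", "count": 0},
    auth=("admin", "changeme"),  # placeholder credentials
    verify=False,  # default self-signed certs; verify properly in production
)
resp.raise_for_status()
for entry in resp.json()["entry"]:
    # Owners that no longer exist in your auth system indicate orphaned objects.
    print(entry["acl"]["owner"], entry["name"])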
Hi @LizAndy123 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
So I have a dashboard and I want to print a custom message when there are 0 results. In the dashboard I am working on I am using the geostats command for a map panel, so when the result comes back empty a custom message should be shown at the top of the panel - I want the custom message on top of this image.
No, unfortunately not. It is bothersome, but it can be worked around by using Splunk itself to analyze the logs and ignore that message at search time. This will show all messages without INFO and without the previously mentioned message:

index="_internal" sourcetype=splunkd NOT INFO NOT "AQR and authentication extensions not supported. Or authentication extensions is supported but is used for tokens only"
Hi. I'm using Splunk Enterprise 7.3.2 and installed universal forwarder 8.2.6 on Linux. I was asked to monitor the .bash_history file, so I installed the universal forwarder and checked that data is coming into Splunk. However, in a real-time search, most of each file is re-ingested along with the newly added data, so monitoring is difficult because previous events are mixed with real-time events. When I run a real-time search again, the _time field of the previously imported events and the newly added events is the same. Is it related to this? Does anyone know how to solve this problem?

+ inputs.conf settings

[monitor:///home/*/.bash_history]
index = test
sourcetype = test_add
disabled = false
crcSalt = <SOURCE>

[monitor:///root/.bash_history]
index = test
sourcetype = test_add
disabled = false
crcSalt = <SOURCE>
Hello, Were you able to resolve this? I'm having the same issue. Thanks.