All Posts


Hi @Ste, how are you? Your SPL is using the HTML entity &gt; instead of the > character. Try:

| where my_time>=relative_time(now(),"-1d@d") AND my_time<=relative_time(now(),"@d")
Hi @meshorer, You can stop that by clicking on your name (top-right) and going to Account Settings. There, click on the Notifications tab and turn off the notifications you don't need, such as Event Reassigned in this example. Once you save it, you won't receive those emails anymore.
Hi @saraomd93, This is pretty generic and can happen for many different reasons, so here are some things to try:
- Maybe there is a PG instance that failed to halt and is still alive. Run ps -ef | grep postgres and see if you get any process running. If so, kill the process.
- Maybe there is a problem with the password set during the upgrade process. Review that against your current configuration and try again.
- Tail <SOAR_DIR>/var/log/pgbouncer/pgbouncer.log for some hints about what is going wrong.
- Tail <SOAR_DIR>/data/db/pg_log/<todays_file>.log for some hints about what is going wrong.
- Check if you have enough space on disk on the partition where SOAR is installed (it may sound trivial, but I got surprised a few years back when my disk got full during the upgrade because of the DB backup that was written there).
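If it helps, here is a rough shell sketch of those checks in one place; /opt/phantom is only an assumption for <SOAR_DIR>, and the PostgreSQL log file name pattern may differ on your version:

    # Run as the SOAR user on the SOAR host; adjust SOAR_DIR to your install path
    SOAR_DIR=/opt/phantom

    # 1. Any leftover PostgreSQL processes? ([p]ostgres avoids matching grep itself)
    ps -ef | grep [p]ostgres

    # 2. Recent pgbouncer errors
    tail -n 50 "$SOAR_DIR/var/log/pgbouncer/pgbouncer.log"

    # 3. Today's PostgreSQL log
    tail -n 50 "$SOAR_DIR"/data/db/pg_log/*"$(date +%Y-%m-%d)"*.log

    # 4. Free space on the partition that holds SOAR
    df -h "$SOAR_DIR"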
Hey @user487596, how are you? You can use REST API endpoints, like this example using curl locally on your Splunk instance:

curl -k -u <username>:<password> \
  https://localhost:8089/servicesNS/nobody/<app>/storage/passwords \
  -d name=user1 -d realm=realm1 -d password=password1

In your code, use a prepare method to retrieve your key:

def prepare(self):
    global API_KEY
    for passwd in self.service.storage_passwords:
        if passwd.realm == "<your_realm_key>":
            API_KEY = passwd.clear_password
    if API_KEY is None or API_KEY == "defaults_empty":
        self.error_exit(None, "No API key found.")

Documentation can be found here
Hi folks! I want to create a custom GeneratingCommand that makes a simple API request, but how do I save the API key in passwords.conf? I have a default/setup.xml file with the following content:

<setup>
  <block title="Add API key(s)" endpoint="storage/passwords" entity="_new">
    <input field="password">
      <label>API key</label>
      <type>password</type>
    </input>
  </block>
</setup>

But when I configure the app, the password (API key) is not saved in the app folder (passwords.conf). And if I need to add several API keys, how can I assign names to them and get information from the storage? I doubt this code will work:

try:
    app = "app-name"
    settings = json.loads(sys.stdin.read())
    config = settings['configuration']
    entities = entity.getEntities(['admin', 'passwords'], namespace=app, owner='nobody', sessionKey=settings['session_key'])
    i, c = entities.items()[0]
    api_key = c['clear_password']
    #user, = c['username'], c['clear_password']
except Exception as e:
    yield {"_time": time.time(), "_raw": str(e)}
    self.logger.fatal("ERROR Unexpected error: %s" % e)
Yes. The naming is a bit confusing here...
Ok. This might call for some more troubleshooting, but here's what I'd check:
1. Whether the contents of etc/apps are the same on all nodes.
2. What the status of the shcluster is (a quick CLI check is sketched below).
3. Whether it's always the same SH that is out of sync, and whether it's out of sync "both ways" - changes on other SHs are not replicated to this one and changes on this one are not replicated to the other ones.
Check the connectivity within the cluster, check logs.
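For point 2, something like this run on one of the cluster members should show the captain, member states and replication status ($SPLUNK_HOME and the credentials are placeholders):

    # Run as the splunk user on an SHC member
    $SPLUNK_HOME/bin/splunk show shcluster-status -auth <user>:<password>

    # Conf replication errors often show up in splunkd.log as well
    tail -n 200 $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -iE "confreplication|shcluster"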
Hi bishida, Any other suggestion?
To configure Log Observer Connect for a Splunk Enterprise instance running on a private network, there will be additional considerations for you. You will need some help from your private networking team to allow incoming traffic from O11y Cloud. Note the IP addresses of this incoming traffic on this doc page: https://docs.splunk.com/observability/en/logs/set-up-logconnect.html#logs-set-up-logconnect A typical approach for this scenario is to use a load balancer (e.g., F5) to listen for this incoming traffic and then pass the request to the Splunk search head on your private network. Using a load balancer is nice because you can manage the SSL cert at the load balancer. If you configure a true pass-through to the search head (e.g. port forwarding), then you will need to configure an SSL cert on the Splunk search head management interface, which adds steps. The fact that you have an OTel collector running on your Splunk Enterprise host doesn't affect this scenario with Log Observer Connect.
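As a sanity check before wiring up the integration, you can hit the search head management API through whatever path the O11y Cloud traffic will take (load balancer or forwarded port); the hostname below is just a placeholder:

    # Verify the management endpoint is reachable and check the certificate presented to external callers
    curl -v -u <username>:<password> https://splunk-sh.example.com:8089/services/server/info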
If you have multiple cores in that HF and it runs e.g. DB Connect, then you should add pipelines to it. That increases its performance. The usual advice is not to use more than 2 pipelines on a node unless it's a physical server and it's an HF. There are some articles/posts/blogs about this where you can find more information.
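For reference, the pipeline count is set in server.conf on the heavy forwarder (the value of 2 here follows the advice above; restart Splunk after changing it):

    # $SPLUNK_HOME/etc/system/local/server.conf on the HF
    [general]
    parallelIngestionPipelines = 2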
And never try to manage the DS by itself! That will not end nicely! Be careful, as some people confuse the deployment server and the SHC deployer. As already said, those are different tools/roles and you must use the correct one.
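In case it helps keep them apart, the two roles are even driven by different CLI commands (the hostname and credentials below are placeholders):

    # Deployment server: reload serverclasses after changing deployment-apps
    splunk reload deploy-server

    # SHC deployer: push the configuration bundle to the search head cluster members
    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:<password>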
Hello @bishida, Thanks for sharing the information. The Splunk Enterprise document says "Choose this option if you manage Splunk Enterprise in a data center or public cloud. Follow the steps in the wizard to securely connect to Splunk Enterprise instance and query logs data using Log Observer." If we are using Splunk Enterprise for logging and want to forward data to the Observability Cloud, is it possible for the Splunk Enterprise host to be on a private network? If yes, what additional steps or configurations are needed to enable the Splunk Enterprise host to transfer data to the Observability Cloud? Additionally, can this be achieved if the splunk-otel-collector.service is running on the Splunk Enterprise host in a private network? Thanks
Here is a working example of statsd receiver: After you restart the collector, it will be listening on UDP port 8125. Since this is UDP and not TCP, you can't test the port like you normally would and get a response. Send a test metric to that port and then search for it in the Metric Finder in O11y Cloud.

echo "statsd.test.metric:42|c|#mykey:#myval" | nc -w 1 -u -4 localhost 8125
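If you need a starting point for that receiver block, a minimal sketch might look like the following. This assumes the statsd receiver from the OpenTelemetry Collector contrib distribution; the signalfx exporter and pipeline name are assumptions based on the default Splunk agent config, so merge it into your own collector config file:

    receivers:
      statsd:
        endpoint: "0.0.0.0:8125"      # listen on UDP 8125
        aggregation_interval: 10s     # how often aggregated metrics are flushed

    service:
      pipelines:
        metrics:
          receivers: [statsd]
          exporters: [signalfx]       # assumption: default Splunk O11y metrics exporter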
The query checks the lookup file, but then does nothing with it.  That's why all events are counted.  Try this

index=abc
| rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|"
| lookup dataeventcode.csv Event_Code OUTPUT Event_Code as found
| where isnotnull(found)
| timechart span=1d dc(Event_Code)

If the Event_Code field did not need to be extracted via rex then we could have used inputlookup to give Splunk a list of codes to search for.
Hi @Ste , Please share the code of your dashboard with the error using the "Insert/Edit Code Sample" button. Ciao. Giuseppe
Hi @secure , if you want to filter results from the main search using the Event_Codes from the lookup, you must use a subsearch:

index=abc
| rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|"
| search [ | inputlookup dataeventcode.csv | fields Event_Code ]
| timechart span=1d dc(Event_Code)

If you extract the Event_Code field before the search as a field, you can put the subsearch in the main search. Ciao. Giuseppe
At one time, parsing on an HF actually made the indexers work *harder*, but I'm not sure that's still the case. HFs should off-load some SVCs from your Splunk Cloud indexers. HFs will increase the network traffic to Splunk Cloud.
Hi All, I have a csv lookup with the below data:

Event_Code
AUB01
AUB36
BUA12

I want to match it with a dataset which has a field named Event_Code with several values. I need to extract the count of the event codes per day that match the csv lookup. My query:

index=abc
| rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|"
| lookup dataeventcode.csv Event_Code
| timechart span=1d dc(Event_Code)

However the result is showing all 100 counts per day instead of matching the event codes from the CSV and then giving the total count per day.
Splunk Observability Cloud relies on the Splunk Core Platform (Splunk Cloud or Splunk Enterprise) for logging capabilities. So, logs aren't sent directly to Observability Cloud; you send them to Splunk Cloud/Enterprise and then pull them in to view with the Log Observer Connect integration in Observability Cloud. When you click into "Log Observer" in Observability Cloud, the logs you see are brought in to view at that moment by reading them from your Splunk Cloud/Enterprise.
Thanks for the feedback. My understanding is that I would gain performance in the future. Am I wrong? I am currently using field extraction in Splunk Cloud.