All Posts



@luizlimapg is correct.  If you copy and paste your search into the Simple XML code window (or the Dashboard Studio code window, for that matter), some special characters will be interpreted by the XML engine (or the JSON engine).  If you need to do that, represent these special characters with HTML entities.  It is best to avoid this, however: if you have a panel, copy and paste your search code into the Search popup instead (similarly, use the search box under Input).
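For example, a clause like bytes<1024 or a literal & in a string value will break the XML parse; inside a <query> element those characters have to be written as entities (and > is usually escaped too for good measure). A minimal sketch, with made-up index and field names:

<panel>
  <table>
    <search>
      <!-- <, > and & must be written as &lt;, &gt; and &amp; in the XML source -->
      <query>index=web status&gt;=500 bytes&lt;1024 vendor="Smith &amp; Co" | stats count by host</query>
    </search>
  </table>
</panel>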
This is one of the few occasions where the transaction command is appropriate.  Something like

| rename "Log text" as LogText
| transaction maxspan=120s startswith="LogText = disconnected" endswith="LogText = connected" keeporphans=true
| where isnull(closed_txn)

Your mock data would give

LogText       Row  _time                closed_txn  duration  eventcount  field_match_sum  linecount
disconnected  5    2024-12-17 08:10:30
disconnected  4    2024-12-17 08:00:10

Here is an emulation of your mock data.

| makeresults format=csv data="Row, _time, Log text
1, 7:00:00am, connected
2, 7:30:50am, disconnected
3, 7:31:30am, connected
4, 8:00:10am, disconnected
5, 8:10:30am, disconnected"
| eval _time = strptime(_time, "%I:%M:%S%p")
| sort - _time
``` data emulation above ```

Play with the emulation and compare with real data.
Hello isoutamo, Thanks for your help!  I was able to log into one of the indexers and manually set frozenTimePeriodInSecs to a lower value.  This seemed to then allow me to Validate and Check, and then Push the new bundle from the Cluster Manager. So, it seems things are much more stable and the errors and warnings have disappeared.  But my indexers are still showing about 94% full for the /opt/splunk folder.
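For reference, one way to see which indexes are actually using the space and what retention they have is a REST search run from the Cluster Manager or a search head (a sketch, assuming access to the data/indexes endpoint on the peers):

| rest /services/data/indexes splunk_server=*
| table splunk_server title currentDBSizeMB maxTotalDataSizeMB frozenTimePeriodInSecs
| sort - currentDBSizeMB

Note that lowering frozenTimePeriodInSecs only frees space once buckets actually roll to frozen; if /opt/splunk keeps filling up, maxTotalDataSizeMB or volume-based limits may need adjusting as well.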
Are you trying to install the most recent version of SOAR? If so, upgrade to postgresql 15 if you can. The documentation is unclear but that's essentially required for 6.3. We ran into trouble trying to upgrade with postgresql 12. I can only imagine 11 has problems as well.
Yes, thank you @bowesmana 
Thanks for the response. I've tweaked my logic to reduce the number of lines I need in my base search, making sure I do a stats in the base search before the chain. Closing this out.
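For anyone finding this later, the base-search-plus-chain pattern looks roughly like this in Simple XML (a sketch with made-up index, fields and searches):

<dashboard version="1.1">
  <label>Base search example</label>
  <search id="base">
    <!-- end the base search with a transforming command (stats) so only aggregated rows are handed to the chained searches -->
    <query>index=web sourcetype=access_combined | stats count by host, status</query>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <!-- the chained (post-process) search operates on the base results -->
          <query>| where status&gt;=500 | stats sum(count) AS errors by host</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>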
Hi there, Have a read here https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Knowledge/Usesummaryindexing#Get_started_with_summary_indexing    cheers, MuS
Hi dtaylor, You have seen this https://community.splunk.com/t5/Splunk-Search/How-to-compare-fields-over-multiple-sourcetypes-without-join/m-p/113477 since you are already thinking of using `stats`.

The important thing is to get a common field from the various data sets and use that in your stats; in your case you could use the field `src_mac`. Something as simple as

| stats values(*) AS * by _time src_mac

after your base search should work, as long as you get src_mac for all data sets.

Hope this helps ... cheers, MuS
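Applied to the join-based search in the question below, that idea looks roughly like this (a sketch; it assumes src_mac in indexB is already stored in, or normalised to, the same lower-case colon-separated format):

(index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13)
    OR (index="indexB" source="/var/logs/devices.log")
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6"))
| stats values(dhcp_host_name) AS dhcp_host_name values(IP_Address) AS IP_Address values(SwitchID) AS SwitchID values(Port_Id) AS Port_Id min(_time) AS first_seen max(_time) AS last_seen count BY src_mac

Events from indexB have no message_text, so the rex simply does nothing there, and a trailing | where isnotnull(SwitchID) keeps only the MAC addresses that actually appeared on the switch port.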
I've been working on a search that I *finally* managed to get working that looks for events generated by a provided network switch and port name and then gives me all the devices that have connected to that specific port over a period of time. Fortunately, most of the device data is included alongside the events which contain the switch/port information ... that is ... everything except the hostname. Because of this, I've tried to use the join command to perform a second search through a second data set which contains the hostnames for all devices which have connected to the network, and match those hostnames based on the shared MAC address field.

The search works, and that's great, but it can only work over a time period of about a day or so before the subsearch breaks past the 50k event limit. Is there any way I can get rid of the join command and maybe use the stats command instead? That's what similar posts to this one seem to suggest, but I have trouble wrapping my head around how the stats command can be used to correlate data from two different events from different data sets ... in this case the dhcp_host_name getting matched to the corresponding device in my networking logs. I'll gladly take any assistance. Thank you.

index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6"))
| eval time=strftime(_time,"%Y-%m-%d %T")
| join type=left left=L right=R max=0 where L.src_mac=R.src_mac L.IP_Address=R.src_ip
    [| search index="indexB" source="/var/logs/devices.log"
     | fields src_mac src_ip dhcp_host_name]
| stats values(L.time) AS Time, count as "Count" by L.src_mac R.dhcp_host_name L.IP_Address L.SwitchID L.Port_Id
I've piped a Splunk log query extract into a table showing disconnected and connected log entries sorted by time. NB row 1 is fine. Row 2 is fine because it connected within 120 sec. Now I want to show "disconnected" entries with no subsequent "connected" row, say within a 120 sec time frame.  So, I want to pick up rows 4 and 5. Can someone advise on the Splunk query format for this?

Table = Connect_Log

Row  Time       Log text
1    7:00:00am  connected
2    7:30:50am  disconnected
3    7:31:30am  connected
4    8:00:10am  disconnected
5    8:10:30am  disconnected
Hi @Ste, how are you? The HTML entity for > is &gt;, but your SPL is using &gr; instead. It should read:

| where my_time&gt;=relative_time(now(),"-1d@d") AND my_time&lt;=relative_time(now(),"@d")
Hi @meshorer,

You can stop that by clicking on your name (top right) and going to Account Settings. There, click on the Notifications tab and turn off the notifications you don't need, such as Event Reassigned in this example. Once you save it, you won't receive those emails anymore.
Hi @saraomd93,

This is pretty generic and can happen for many different reasons, so here are a few things to try:

- Maybe there is a PG instance that failed to halt and is still alive. Run ps -ef | grep postgres and see if you get any process running. If so, kill the process.
- Maybe there is a problem with the password set during the upgrade process. Review that against your current configuration and try again.
- tail the <SOAR_DIR>/var/log/pgbouncer/pgbouncer.log for hints about what is going wrong.
- tail the <SOAR_DIR>/data/db/pg_log/<todays_file>.log for hints about what is going wrong.
- Check that you have enough space on disk on the partition where SOAR is installed (it may sound basic, but I got surprised a few years back when my disk filled up during an upgrade because the DB backup was written there).
Hey @user487596, how are you? You can use REST API endpoints, like this example using curl locally on your Splunk instance:

curl -k -u <username>:<password> \
    https://localhost:8089/servicesNS/nobody/<app>/storage/passwords \
    -d name=user1 -d realm=realm1 -d password=password1

In your code, use a prepare method to retrieve your key:

def prepare(self):
    global API_KEY
    for passwd in self.service.storage_passwords:
        if passwd.realm == "<you_realm_key>":
            API_KEY = passwd.clear_password
    if API_KEY is None or API_KEY == "defaults_empty":
        self.error_exit(None, "No API key found.")

Documentation can be found here
Hi folks! I want to create a custom GeneratingCommand that makes a simple API request, but how do I save the API key in passwords.conf? I have a default/setup.xml file with the following content:

<setup>
  <block title="Add API key(s)" endpoint="storage/passwords" entity="_new">
    <input field="password">
      <label>API key</label>
      <type>password</type>
    </input>
  </block>
</setup>

But when I configure the app, the password (API key) is not saved in the app folder (passwords.conf). And if I need to add several API keys, how can I assign names to them and get information from the storage? I doubt this code will work:

try:
    app = "app-name"
    settings = json.loads(sys.stdin.read())
    config = settings['configuration']
    entities = entity.getEntities(['admin', 'passwords'], namespace=app, owner='nobody', sessionKey=settings['session_key'])
    i, c = entities.items()[0]
    api_key = c['clear_password']
    #user, = c['username'], c['clear_password']
except Exception as e:
    yield {"_time": time.time(), "_raw": str(e)}
    self.logger.fatal("ERROR Unexpected error: %s" % e)
Yes. The naming is a bit confusing here...
Ok. This might call for some more troubleshooting, but here is what I'd check:

1. Whether the contents of etc/apps are the same on all nodes.
2. What the status of the shcluster is (e.g. with splunk show shcluster-status).
3. Whether it's always the same SH that is not in sync, and whether it's out of sync "both ways" - changes on other SHs are not replicated to this one and changes on this one are not replicated to the others.

Check the connectivity within the cluster, and check the logs.
Hi bishida, any other suggestions?
To configure Log Observer Connect for Splunk Enterprise running on a private network, there are some additional considerations. You will need help from your private networking team to allow incoming traffic from O11y Cloud. Note the IP addresses of this incoming traffic on this doc page: https://docs.splunk.com/observability/en/logs/set-up-logconnect.html#logs-set-up-logconnect

A typical approach for this scenario is to use a load balancer (e.g., F5) to listen for this incoming traffic and then pass the request to the Splunk search head on your private network. Using a load balancer is nice because you can manage the SSL cert at the balancer. If you configure a true pass-through to the search head (e.g. port forwarding), then you will need to configure an SSL cert on the Splunk search head management interface, which adds steps.

The fact that you have an OTel collector running on your Splunk Enterprise host doesn't affect this scenario with Log Observer Connect.
If you have multiple cores in that HF, and if it runs e.g. DB Connect, then you should add pipelines to it. That increases its performance. The usual advice is not to use more than 2 pipelines on a node unless it's a physical server acting as an HF. There are articles/posts/blogs about this where you can find more information.
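For reference, the setting behind this is parallelIngestionPipelines in server.conf on the HF, roughly like this (a sketch; size it to your spare cores and restart Splunk afterwards):

# server.conf on the heavy forwarder
[general]
# each extra pipeline consumes roughly one additional CPU core plus memory
parallelIngestionPipelines = 2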