All Posts

Hi Giuseppe, may I have a pointer to the Splunk documentation for this: "is its permission set to App or Global?" Thank you,
Hi @josephp , did you define the field extraction for the critC field on the Search Head? If yes, is its permission set to App or Global? If App, are you searching from the same app? Ciao. Giuseppe
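One quick way to see where (and at which sharing level) the extraction is defined is btool on the Search Head; "dataA" below is the sourcetype from your search, so adjust if your extraction lives under a different props.conf stanza:

$SPLUNK_HOME/bin/splunk btool props list dataA --debug | grep -i critC

The --debug flag prefixes each line with the file it comes from, so you can tell which app provides the EXTRACT for critC, whether it sits in local/ or default/, and whether it was deployed to the cluster at all.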
Hi, we recently migrated from a standalone Search Head to a clustered one. However, we are having issues running some search commands. For example, this query is not working on the new SH cluster: sourcetype=dataA index=deptA | where critC > 25. On the old Search Head this query runs fine and we see the results as expected, but on the SH cluster it doesn't yield anything. I have run the "sourcetype=dataA index=deptA" search by itself, and both environments see the same events. I am not sure why the search with "| where critC > 25" works on the standalone SH but not on the cluster. Any help would be appreciated. Thank you.
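A quick way to confirm whether the field is even being extracted on the new cluster is a diagnostic search like this (a minimal sketch, reusing your own index and sourcetype):

sourcetype=dataA index=deptA | eval hasCritC=if(isnotnull(critC), "yes", "no") | stats count BY hasCritC

If everything lands in "no" on the SHC while the old SH shows "yes", the knowledge object defining critC was not migrated, or is not shared beyond its original app.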
It is clean at startup:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:fallback...
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:events...
syslog-ng checking config
sc4s version=3.34.1
Configuring the health check port to: 8080
[2025-01-21 13:36:54 +0000] [135] [INFO] Starting gunicorn 23.0.0
[2025-01-21 13:36:54 +0000] [135] [INFO] Listening at: http://0.0.0.0:8080 (135)
[2025-01-21 13:36:54 +0000] [135] [INFO] Using worker: sync
[2025-01-21 13:36:54 +0000] [138] [INFO] Booting worker with pid: 138
starting syslog-ng
errors:

- - syslog-ng 149 - [meta sequenceId="100"] Server disconnected while preparing messages for sending, trying again; driver='d_hec_fmt_other#0', location='root generator dest_hec:5:5', worker_index='3', time_reopen='10', batch_size='2'
host = dev-ipz001-splunk05   source = sc4s   sourcetype = sc4s:events

1/21/25 2:41:42.705 PM
- - syslog-ng 149 - [meta sequenceId="100"] http: error sending HTTP request; url='https://somehost.com:3001/services/collector/event', error='Failed sending data to the peer', worker_index='3', driver='d_hec_fmt_other#0', location='root generator dest_hec:5:5'
host = splunk05   source = sc4s   sourcetype = sc4s:events
I am getting this all the time, even though (a) the index exists and I can test it with curl, and (b) when SC4S starts it shows it is able to connect. It is annoying. What else can I check? This is not well documented.
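For reference, the kind of manual HEC test that should succeed if the token and index are healthy (URL and port taken from your error message; the token value is a placeholder):

curl -k https://somehost.com:3001/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "sc4s connectivity test", "index": "sddc_internal", "sourcetype": "sc4s:events"}'

If that returns {"text":"Success","code":0} while SC4S still intermittently logs 'Failed sending data to the peer', the usual suspects are an idle-timeout on a load balancer sitting in front of HEC, or TLS settings that differ between curl and syslog-ng.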
There are several possible approaches, but each of them has its own drawbacks. The most obvious three are:

1) Use eventstats to add a count to the events, then sort and limit by the count value (might be memory-intensive, as I said earlier).

2) Use a subsearch to find the count, then search your whole body of data for those events (if you can't use "fast" commands like tstats for your subsearch you might hit all the subsearch-related problems; you're also effectively digging twice through your whole data set).

3) Add more values() aggregations to your stats, listing specific fields (might cause problems with "linking" values from different fields, especially if potentially empty fields are involved).

A sketch of the first option follows below.
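A minimal sketch of option 1, assuming zbpIdentifier identifies the boxes and a top-ten cutoff (<your_search> stands in for your base search):

<your_search>
| eventstats count AS msgCount BY zbpIdentifier
| sort - msgCount
| streamstats dc(zbpIdentifier) AS boxRank
| where boxRank <= 10

Because the raw events from the ten busiest boxes all survive, every other field is still available for a subsequent table or stats.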
Hi, would it be possible for us to regularly read the statistics from the Protection Group Runs via the Splunk Add-on? These fields, which are also available via Helios, are of interest to us:

Start Time
End Time
Duration
Status
Sla Status
Snapshot Status
Object Name
Source Name
Group Name
Policy Name
Object Type
Backup Type
System Name
Logical Size Bytes
Data Read Bytes
Data Written Bytes
Organization Name

This would make it much easier for us to create the necessary reports in Splunk. Thank you very much.
Check the logs on the receiving end (the server you're connecting to). You can dump the traffic and check whether the TLS negotiation is happening properly, but I suspect it works up to the point where you're refused by the receiving end. The question is why, and the answer should be in your splunkd.log.
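A quick way to watch the negotiation by hand, assuming the management port 8089 and your CA file path (both placeholders here):

openssl s_client -connect <receiving_host>:8089 -CAfile /path/to/ca/cert.pem

If the handshake completes and you see the server's certificate chain plus "Verify return code: 0 (ok)", the TLS layer is fine and the refusal is happening at the application layer, which again points at splunkd.log on the receiver.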
Hi @deckard1984 , do you know the stats command (https://docs.splunk.com/Documentation/SCS/current/SearchReference/StatsCommandOverview)?

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| stats count latest(_time) AS _time values(activity) AS activity BY Computer process_name

Ciao. Giuseppe
Hi @Ste , with my above solution you can reach your target; otherwise you can use a subsearch (less performant):

<your_search>
    [ search <your_search>
    | where stoerCode IN ("K02")
    | stats count AS periodCount BY zbpIdentifier
    | sort - periodCount
    | head 10
    | fields zbpIdentifier ]
| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode

I prefer the other solution. Ciao. Giuseppe
Right now I have a table list with fields populated, where one process_name repeats across multiple hosts with the same EventID.

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| table Computer, UtcTime, timestampType, activity, Channel, attck, process_name

I want a total sum of counts per host and process_name, with all activity (or target file names) listed under it. For example:

Computer | UTC | timestamp | activity | process_name | count
1 | File list | same - repeats | missing value
2 | File list | same - repeats | missing value
Maybe this will give you what you are looking for. Use stats to include all the fields, and if you don't want the count in the table, add a fields command afterwards, like | fields - periodCount

| stats count AS periodCount BY zbpIdentifier zbpIdentifier_bp importZeit_uF
| sort - periodCount
Hi @splunklearner , access grants are managed in Splunk at the index level, so the best approach is to create different indexes for different grants. Otherwise, you can put all the events in the same index and, when you create roles, put a search filter on each one, e.g. one role can see only events in index X with sourcetype A or source B. Ciao. Giuseppe
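A minimal sketch of what that looks like in authorize.conf (role and index names here are placeholders; srchIndexesAllowed and srchFilter are the actual setting names):

[role_team_x]
importRoles = user
# members see only their own application's index, further narrowed by a filter
srchIndexesAllowed = index_x
srchFilter = (sourcetype=A OR source=B)

[role_team_a]
importRoles = user
# the combined "A" team is simply allowed all three application indexes
srchIndexesAllowed = index_x;index_y;index_z

This way the per-application roles stay restricted while the umbrella team sees everything, and no data has to be mixed in one index.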
Hello, the configuration files contain the following:

[sslConfig]
enableSplunkdSSL = true
sslPassword = value
sslRootCAPath = /path/to/ca/cert
serverCert = /path/to/srv/cert
caTrustStore = splunk
caTrustStorePath = path/to/trust/ca
caPath = path/to/trust/c
caCertFile = path/to./ca

Yes, the connection was to the management port. The self-signed certificate was only for the web interface (and I have no issues regarding that). However, the problem lies between the components of the architecture.
Here's what I want to achieve: we have several hundred boxes sending messages, identified by the name in zbpIdentifier. I want to know the top ten boxes by the number of messages they have sent over a given period of time. For this top ten, I then want to display some more data details; that is why I am trying to "recover" all the data that is no longer available after stats count.
I would like to configure the dashboard I created so that it is displayed here.
Hello all, consider that application X requests onboarding onto Splunk. We create an index for application X, a new role (restricted to the X index), and assign this role to the X AD group. Likewise we have applications Y, Z, and so on, handled the same way. But now the requirement is that applications X, Y, and Z come under the umbrella 'A' applications, and all 'A' team members (probably X, Y, Z combined) should be able to view the X, Y, and Z applications. How can we achieve this? We can't create a single index for all of X, Y, and Z because the logs should not be mixed.
We are migrating the Splunk 9.0.3 Search Head from a virtual box to a physical box. Splunk services were up and running on the new physical box, but in Splunk Web UI I was unable to log in using my authorized credentials, and I found the below error in splunkd.log:

01-21-2025 05:18:05.218 -0500 ERROR ExecProcessor [3275615 ExecProcessor] - message from "/apps/splunk/splunk/etc/apps/splunk_app_db_connect/bin/server.sh" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator
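A first thing worth checking on the new box (standard Splunk CLI, assuming a default installation layout):

$SPLUNK_HOME/bin/splunk show kvstore-status

If the status is not "ready", look at mongod.log under $SPLUNK_HOME/var/log/splunk for the underlying reason; a common cause after a host migration is a changed server name or certificates that no longer match, which keeps the KV Store (and with it apps like DB Connect) from initializing.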
If it's wrong, then how does it work in search? My results are correct until I use my search in a dashboard.