All Posts


Hi. Would it be possible for us to regularly read the statistics from the Protection Group Runs via the Splunk Add-on? These fields, which are also available via Helios, are of interest to us: Start Time, End Time, Duration, Status, Sla Status, Snapshot Status, Object Name, Source Name, Group Name, Policy Name, Object Type, Backup Type, System Name, Logical Size Bytes, Data Read Bytes, Data Written Bytes, Organization Name. This would make it much easier for us to create the necessary reports in Splunk. Thank you very much.
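If those run records were ingested by the add-on (or by a scripted/REST input against Helios), the reporting would reduce to a plain stats search. A minimal sketch; the index, sourcetype, and field names here are illustrative assumptions, not the add-on's actual schema:

index=cohesity sourcetype="cohesity:protectiongroup:run"
| eval durationMin = round((endTimeUsecs - startTimeUsecs) / 60000000, 1)
| stats count AS runs avg(durationMin) AS avgDurationMin sum(dataReadBytes) AS totalReadBytes sum(dataWrittenBytes) AS totalWrittenBytes BY groupName policyName systemName status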
Check the logs on the receiving end (the server you're connecting to). You can dump the traffic and check whether the TLS negotiation is happening properly, but I suspect it proceeds up to the point where you're refused by the receiving end. The question is why, and the answer should be in your splunkd.log.
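If the receiving instance indexes its own _internal logs, a search like this can surface the refusal without packet captures. The component names listed are common ones for TLS and forwarding problems, but they vary by version, so treat this as a starting point:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) (component=SSLCommon OR component=TcpInputProc OR component=TcpOutputProc)
| table _time host component log_level _raw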
Hi @deckard1984 ,
do you know the stats command (https://docs.splunk.com/Documentation/SCS/current/SearchReference/StatsCommandOverview)?

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| stats count latest(_time) AS _time values(activity) AS activity BY Computer process_name

Ciao.
Giuseppe
Hi @Ste ,
with my above solution you can reach your target; otherwise you can use a subsearch (less performant):

<your_search>
    [ search <your_search>
      | where stoerCode IN ("K02")
      | stats count as periodCount by zbpIdentifier
      | sort -periodCount
      | head 10
      | fields zbpIdentifier ]
| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode

I prefer the other solution.
Ciao.
Giuseppe
Right now I have a table with fields populated, where one process_name repeats across multiple hosts with the same EventID.

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| table Computer, UtcTime, timestampType, activity, Channel, attck, process_name

I want a total count per host and process_name, with all activity (or target filenames) listed under it. For example:

Computer | UtcTime | timestampType | activity  | process_name   | count
1        | ...     | ...           | file list | same - repeats | missing value
2        | ...     | ...           | file list | same - repeats | missing value
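A sketch of one way to get that summary, reusing the field names from the posted search (adjust the grouping fields as needed):

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat EventDescription ": " TargetFilename " by " User activity
| stats count values(activity) AS activity latest(UtcTime) AS UtcTime BY Computer process_name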
Maybe this will give you what you are looking for: use stats to include all the fields, and if you don't want the count in the table, add a fields command afterwards, like | fields - periodCount

| stats count as periodCount by zbpIdentifier zbpIdentifier_bp importZeit_uF
| sort -periodCount
Hi @splunklearner ,
access grants are managed in Splunk at the index level, so the best approach is to create different indexes for different grants.
Otherwise, you can put all the events in the same index and, when you create roles, put a filter on each one, e.g. one role can see only events in index X with sourcetype A or source B.
Ciao.
Giuseppe
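As a sketch of the second option, the per-role filter lives in authorize.conf; the role, index, sourcetype, and source names below are placeholders:

# authorize.conf -- role restricted to one index, narrowed further by a search filter
[role_app_x_readers]
srchIndexesAllowed = index_x
srchFilter = (sourcetype=A OR source=B)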
Hello,
the configuration files contain the following:

[sslConfig]
enableSplunkdSSL = true
sslPassword = value
sslRootCAPath = /path/to/ca/cert
serverCert = /path/to/srv/cert
caTrustStore = splunk
caTrustStorePath = path/to/trust/ca
caPath = path/to/trust/c
caCertFile = path/to./ca

Yes, the connection was to the management port. The self-signed certificate was only for the web interface (and I have no issues regarding that). However, the problem lies between the components of the architecture.
Here's what I want to achieve: we have several hundred boxes sending messages. The boxes are identified by the name in zbpIdentifier. I want to know the top ten boxes by the number of messages they have sent over a given period of time. For this top ten, I then want to display some more detailed data, which is why I am trying to "recover" all the data that is no longer available after stats count.
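One self-contained way to rank the boxes without losing the row-level data is eventstats, which adds the count to every event instead of collapsing them. A sketch, keeping the field names from this thread:

<your_search> stoerCode IN ("K02")
| eventstats count AS periodCount BY zbpIdentifier
| sort 0 - periodCount
| streamstats dc(zbpIdentifier) AS boxRank
| where boxRank <= 10
| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode periodCount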
I want to configure the dashboard I created so that it is displayed here.
Hello all,
consider that an application X requested onboarding onto Splunk. We created an index for application X, a new role (restricted to the X index), and assigned this role to the X AD group. Likewise for applications Y, Z, and so on; we do them the same way. But now the requirement is that the X, Y, Z applications come under an 'A' application group, and they want all 'A' team members (probably X, Y, Z combined) to view the X, Y, and Z applications. How can we achieve this? We can't create a single index for all of X, Y, and Z because the logs should not be mixed.
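One common pattern, sketched here with placeholder names, is to leave the per-application roles untouched and add an umbrella role that inherits them via importRoles, then map that role to the 'A' AD group:

# authorize.conf -- existing per-app roles stay as they are
[role_app_x]
srchIndexesAllowed = index_x

[role_app_y]
srchIndexesAllowed = index_y

[role_app_z]
srchIndexesAllowed = index_z

# umbrella role for team A: index access is the union of the imported roles
[role_team_a]
importRoles = role_app_x;role_app_y;role_app_z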
We are migrating the Splunk 9.0.3 Search Head from a virtual box to a physical box. Splunk services were up and running on the new physical box, but in Splunk Web UI I was unable to log in using my authorized credentials, and I found the below error in splunkd.log:

01-21-2025 05:18:05.218 -0500 ERROR ExecProcessor [3275615 ExecProcessor] - message from "/apps/splunk/splunk/etc/apps/splunk_app_db_connect/bin/server.sh" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator
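Since the error points at KV Store rather than DB Connect itself, the mongod and KV Store startup messages in _internal are worth checking first ("splunk show kvstore-status" from the CLI gives a quick readiness summary, too). A starting-point search; exact component names vary by version:

index=_internal (source=*mongod.log* OR (sourcetype=splunkd component=KVStore*)) (ERROR OR WARN OR failed)
| table _time host source component _raw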
If it's wrong... then how does it work in search? My results are correct until I use my search in a dashboard.
Hi @woodman2 ,
as I said, a dollar char at the borders of a word is the way Splunk identifies tokens, so you cannot use this format for your fields.
You must modify your searches and your data structure.
Ciao.
Giuseppe
Door and DoorName are field names that exist in my search results. They have values, and my search works fine with them, unless I use them in a dashboard, because it treats them as tokens instead of taking their values from my results.
Hi @woodman2 ,
at first, don't use the search command after the main search, because it makes the search slower.
Then, what are $Door$ and $DoorName$? In Splunk these are tokens defined in a dashboard. If you have a variable or a field with this name, it cannot run in a search.
Ciao.
Giuseppe
I have such a search and it works fine, but not in a dashboard!

index=unis
| search *sarch*
| eval name = coalesce(C_Name, PersonName)
| eval "DoorName"=if(sourcetype=="ARX:db", $Door$, $DoorName$)

When I use this in a dashboard, it looks for Door and DoorName as tokens, while they are values of those fields. What should I do to make it work in Dashboard Studio? The error I get:

Set token value to render visualization $Door$ $DoorName$

Edit: if I remove all $ it still works the same in search, but it still doesn't work in the dashboard (without any error): it returns results, but the DoorName field is empty.
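For what it's worth, SPL's own syntax for referencing field names inside eval is single quotes, which dashboards do not expand as tokens. A sketch of the same eval written that way (assuming Door and DoorName are extracted fields, and folding the *sarch* term into the main search as suggested above):

index=unis *sarch*
| eval name = coalesce(C_Name, PersonName)
| eval DoorName = if(sourcetype=="ARX:db", 'Door', 'DoorName')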
Yes. The result is the same.    
Are you sending the logs directly to Splunk Cloud or through an intermediate forwarder? An app with props.conf and transforms.conf uploaded to Splunk Cloud runs on the search head. In my case I had to install the app on the intermediate forwarder that sends on-prem logs to Splunk Cloud; after that it worked as it had done before migrating to the cloud.
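The usual reason is that index-time operations have to run where the events are parsed (the heavy/intermediate forwarder), not on the search head. A minimal index-time routing example with placeholder stanza and index names:

# props.conf -- must live where the events are parsed
[my:sourcetype]
TRANSFORMS-route = route_to_app_index

# transforms.conf -- index-time rewrite of the destination index
[route_to_app_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = app_index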
@annielee have you tried using the relevant .whl files for the 2 libs and adding them to the app as wheel dependencies?  --- Hope this helped? Happy SOARing! ---