All Topics

Hello, here is a small picture of how the environment is structured: red arrow -> Source Splunk TCP (Cribl Stream).

I'm trying to forward journald data from the Splunk Universal Forwarder to the Cribl Worker (black to blue box). I configured the journald forwarding using the instructions from Splunk (Get data with the Journald input - Splunk Documentation).

The journald data is forwarded and does arrive at the Cribl Worker. Problem: the Cribl Worker cannot distinguish the individual events in the journald data, i.e. it does not know where a single event ends, and therefore combines several individual events into one large one. The Cribl Worker always merges about 5-8 journald events. (I have marked the individual events here; they arrive as one block like this, sometimes more together, sometimes less.)

Event 1: Invalid user test from 111.222.333.444 port 1111 pam_unix(sshd:auth): check pass; user unknown pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.222.333.444 Failed password for invalid user test from 111.222.333.444 port 1111 ssh2 error: Received disconnect from 111.222.333.444 port 1111:13: Unable to authenticate [preauth] Disconnected from invalid user test 111.222.333.444 port 1111 [preauth]

What I tested: if the journald data from the Universal Forwarder is forwarded not via a Cribl Worker but via a Heavy Forwarder (the blue box in the picture above is then a Splunk Heavy Forwarder instead of a Cribl Worker), the events arrive individually and are easy to read. Like this:

Event 1: Invalid user test from 111.222.333.444 port 1111
Event 2: pam_unix(sshd:auth): check pass; user unknown
Event 3: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.222.333.444
Event 4: Failed password for invalid user test from 111.222.333.444 port 1111 ssh2
Event 5: error: Received disconnect from 111.222.333.444 port 1111:13: Unable to authenticate [preauth]
Event 6: Disconnected from invalid user test 111.222.333.444 port 1111 [preauth]

I'm looking for a solution that lets me keep sending the journald data through the setup shown in the first figure, but have it arrive broken into events as in the second case. Thanks in advance for your help.
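A hedged sketch of one likely fix on the Cribl side: the Splunk TCP source in Cribl Stream can have an Event Breaker ruleset attached, and a regex-type rule that breaks before each syslog-style timestamp may restore the per-event boundaries the Heavy Forwarder produces. The ruleset settings and pattern below are assumptions to adapt, not a confirmed configuration:

    Event Breaker rule (attached to the Cribl "Splunk TCP" source):
      Type:                Regex
      Event Breaker regex: (?=\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2})
      Max Event Bytes:     51200

The zero-width lookahead keeps the timestamp with the event that follows it; if your journald lines carry a different timestamp format, the pattern has to match that instead. The working Heavy Forwarder test suggests the UF is sending unparsed data that Splunk breaks but Cribl, without a breaker rule, does not.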
Does anyone know of any examples on Splunkbase of custom commands written in JavaScript? I've written about a dozen custom commands using Python and am familiar with that process. The dev docs suggest the Splunk SDK for Python should be used for JS commands, but I don't understand how that's possible without importing libraries like Flask. https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/nonpythonscscs
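For reference, the dev-docs page linked above describes the chunked ("version 2") custom search command protocol, which any language can implement by reading and writing chunks on stdin/stdout; the Python SDK is just one ready-made implementation of it. A minimal, hedged sketch of registering a JavaScript command, assuming a hypothetical script myjscommand.js in the app's bin directory that starts with a #!/usr/bin/env node shebang and is marked executable:

    # commands.conf (stanza and script names are hypothetical)
    [myjscommand]
    filename = myjscommand.js
    chunked = true

With chunked = true the script itself speaks the protocol over stdin/stdout, so no web framework such as Flask is involved.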
Hello everyone, I downloaded "TA_Genesys_Cloud_1.0.14_export.tgz" from https://github.com/SplunkBAUG/CCA, added the required inputs, and configured Splunk with the available Genesys Cloud CX (Canada Region) OAuth credentials (client ID and client secret). When I try to search using index="genesys cloud", it returns an empty response. Any help will be much appreciated. Thanking you in advance.
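One hedged observation: Splunk index names cannot contain spaces, so index="genesys cloud" can never match a real index; the index the add-on actually writes to should be visible in its input configuration (genesys_cloud below is only a guess). Modular-input failures typically surface in _internal, so a check along these lines may show why nothing is arriving:

    index=_internal source=*splunkd.log* log_level=ERROR *genesys*

If that comes back clean, try index=genesys_cloud (or whatever index the inputs actually name) over the last 24 hours.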
Hi all, I'm trying to see if logs can be sent to different indexes at index time depending on a regex. Is it possible to send logs to an index whose name is part of the Source metadata? Below are my props.conf and transforms.conf.

props.conf:
    [test:logs]
    TRANSFORMS-new_index = new_index

transforms.conf:
    [new_index]
    SOURCE_KEY = MetaData:Source
    REGEX = (?<index>\w+)\-\d+
    FORMAT = $1                #This needs to be dynamic
    DEST_KEY = _MetaData:Index

Thanks in advance.
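A hedged sketch of a variant that may behave as intended: the value behind MetaData:Source carries a source:: prefix, FORMAT = $1 refers back to the first unnamed capture group, and the extracted index must already exist on the indexers or the events will be dropped (or land in the last-chance index, if one is configured):

    # transforms.conf -- sketch, assuming source values like source::myindex-123
    [new_index]
    SOURCE_KEY = MetaData:Source
    REGEX = ^source::(\w+)-\d+
    FORMAT = $1
    DEST_KEY = _MetaData:Index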
I have this app set up and installed; however, it contains a lot of panels, which are essentially saved searches or search strings that get run, and many of these panels are the same. Is there a list of the panels under FISMA so that a person can combine them into fewer dashboards, say 5 or less, rather than having 3-4 that are identical plus 1-2 variants filed under different categories (e.g. AC-5 having 3 and AC-18 having 3, with only 1 differing between them)? Bonus: it seems to me someone must have done this already and have their own how-to guide that would benefit the FISMA or RMF world. I'm just talking about covering as many controls as we can in a few, say 3-5, dashboards so people don't have to click through 20 different places for one item of interest.
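As a hedged starting point for building that consolidated view yourself, a REST search can dump every dashboard in the app together with its XML, which makes duplicated panel searches easy to spot (the app name is a placeholder to replace):

    | rest /servicesNS/-/-/data/ui/views splunk_server=local
    | search eai:acl.app="<fisma_app_name>"
    | table title eai:acl.app eai:data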
Hi, I'm working with a dashboard that has three rows of data. Each row is filtered based on specific conditions, and I have enabled drilldown for all of these rows. For example:

row 1: Panel 1: token name: r1_drill, condition: success; Panel 2: token name: r1_drill, condition: failed
row 2: Panel 1: token name: r2_drill, condition: success, type: abc; Panel 2: token name: r2_drill, condition: failed, type: abc
row 3: Panel 1: token name: r3_drill, condition: success, type: def; Panel 2: token name: r3_drill, condition: failed, type: def

The problem is with passing the tokens to the pie charts: clicking on r3_drill works as expected, but clicking on r2_drill keeps the r3_drill conditions as defaults, causing incorrect filtering. I need to unset r3_drill when r2_drill is set, and vice versa for all of them. Any guidance or examples would be greatly appreciated. Thanks.
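A hedged Simple XML sketch of the usual pattern: each panel's drilldown sets its own token and explicitly unsets the other rows' tokens (token names are taken from the post; the click value is illustrative):

    <drilldown>
      <set token="r2_drill">$click.value$</set>
      <unset token="r1_drill"></unset>
      <unset token="r3_drill"></unset>
    </drilldown>

The row 1 and row 3 panels would mirror this, each unsetting the two tokens they do not own.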
Hello, I've been asked to provide a list of all alerts/reports/dashboards that contain the value "You found a bug!" in the underlying search. I have no idea how to do this :). I manually found one alert that uses a matching search:

    source=bluefletch "details.package"="com.siteone.mobilepro" "details.message.environment"=PROD (event=ErrorEvent OR event=ExceptionEvent) "details.message.additionalInfo.content{}.Title"="You found a bug!"

I just need to find all of the other alerts/reports/dashboards that also use this. Does anyone have any ideas how this can be done? Thank you for any help on this. Thanks, Tom
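A hedged sketch using the REST endpoints: saved searches cover both alerts and reports, while dashboards keep their searches inside the view XML, so two passes are needed:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search="*You found a bug!*"
    | table title eai:acl.app eai:acl.owner

For dashboards, the same idea works with | rest /servicesNS/-/-/data/ui/views and a wildcard match on the eai:data field.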
My log data looks like the screenshot above, and I am using this query:

    index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/CXMLOrders.log"
    | eval timestamp=strftime(_time, "%F")
    | chart limit=30 count as count over TransactionType by timestamp

I have to build a report of total counts per transaction type, date-wise. Please help me form the query; due to spacing it is not displaying properly. Example values: TransactionType = cXML OrderRequest, TransactionType = cXML ConfirmationRequest. Regards, Avik
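A hedged alternative: since the goal is a per-date count by transaction type, timechart buckets on _time directly and avoids the separate eval (span and limit are adjustable):

    index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/CXMLOrders.log"
    | timechart span=1d limit=30 count by TransactionType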
I am trying to delete users that use only Splunk authentication. I have the admin role. I have tried both the web GUI and the CLI to delete users, but they are still visible after deletion. Something does seem to have happened, though: even though the users still show up with the list command in the CLI, when I try to delete a user with the remove command it says the user does not exist. Is there a config file I need to edit to get the users to stop appearing? This is also a clustered Splunk Enterprise environment; does that mean there are further steps I have to take to delete a user? Thanks
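A hedged place to look: native Splunk accounts live in $SPLUNK_HOME/etc/passwd on each search head, so in a clustered search tier a user removed on one member may still exist, and keep appearing, on the other members until removed there too. The standard CLI forms, for comparison against what the GUI shows on each node:

    splunk list user -auth admin:<password>
    splunk remove user <username> -auth admin:<password>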
Hi, can someone help me with a Splunk search to identify the browsers installed on a machine? I'm looking for a specific field where I can capture this data. Thanks.
Show Source is not loading for only one event; after loading, I get "Failed to find target event in final sorted event list. Cannot properly prune results".
I have a Splunk query whose message field contains the following text:

    "message":"sypher:[tokenized] build successful -\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\}}"

I need to extract the value ABC123XYZ, which sits between account_id\":\" and \",\"activity. I tried the following query but it's not returning any data:

    index=prod_logs app_name="abc"
    | rex field=_raw "account_id\\\"\:\\\"(?<accid>[^\"]+)\\\"\,\\\"activity"
    | where isnotnull(accid)
    | table accid
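A hedged workaround for the escaping problem: the rex pattern is unescaped once by the SPL string parser and again by the regex engine, so matching the literal \":\" sequence needs more backslashes than feels natural. A looser pattern that simply skips the punctuation after account_id may be easier (the \w+ class is an assumption about what the IDs contain):

    index=prod_logs app_name="abc"
    | rex "account_id\W+(?<accid>\w+)"
    | where isnotnull(accid)
    | table accid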
I installed the Snort 3 JSON Alerts add-on and made changes in inputs.conf (/opt/splunk/etc/apps/TA_Snort3_json/local) like this:

    [monitor:///var/log/snort/*alert_json.txt*]
    sourcetype = snort3:alert:json

When I search for events (sourcetype="snort3:alert:json") there is NOTHING, yet Splunk knows there is something in that path, and how much, as shown below. What I can add is what Splunk reports when starting:

    Value in stanza [eventtype=snort3:alert:json] in /…/TA_Snort3_json/default/tags.conf, line 1 is not URL encoded: eventtype = snort3:alert:json
    Your indexes and inputs configurations are not internally consistent. For more info, run 'splunk btool check --debug'

Please help.
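A hedged reading of the btool warning: the monitor stanza names no index, so events go to the default index, and if the add-on or its searches expect a dedicated index that does not exist, the configuration is flagged as inconsistent. A sketch that pins the index down explicitly (main is a stand-in; create or substitute the index you actually want):

    [monitor:///var/log/snort/*alert_json.txt*]
    sourcetype = snort3:alert:json
    index = main

Then a search like index=* sourcetype="snort3:alert:json" | head 10 should reveal where the events actually landed, if they are being read at all.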
Hello all, our current environment is a three-site cluster: two sites on-premises (14 indexers, 7 in each site) and one site (7 indexers) hosted on AWS. The AWS indexers were clustered recently. It has been almost 15 days, but the replication factor and search factor are still not met. What might be the reason, and what are all the possible ways I can resolve this? There are around 300 fixup tasks pending, and the number has remained the same for the past 2 weeks. I've manually rolled the buckets, but still no use.
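A hedged first step is to ask the cluster manager why the fixups are stuck; run on the manager node, the command below reports per-peer and per-bucket status and usually names the blocking condition (unreachable peers, missing copies, sites excluded by the replication policy):

    splunk show cluster-status --verbose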
Hello all, I enabled the indicators feature with "/opt/phantom/bin/phenv set_preference --indicators yes". I have two problems that might be connected: 1. I only enabled three fields in the Indicators tab under Administration, but SOAR still created many indicators for fields that are configured as disabled. 2. I see that enabling the indicators feature consumes all my free RAM, and I have a lot of RAM, so I understand there is a problem here. Can anyone say why, and how to solve it?
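While investigating, the same preference command quoted above presumably toggles the feature back off to reclaim the RAM; this mirrors the post's own syntax rather than a documented flag, so treat it as an assumption:

    /opt/phantom/bin/phenv set_preference --indicators no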
I want to separate events by date, and I want to isolate the red-highlighted entries shown above, which have similar formats. I don't know how; I would appreciate it if you could tell me.
Hi everyone, is there a way to speed up Splunk SOAR's capacity to process events? It can't even process 100 events every 5 minutes. I found a solution that talks about the workers, but the file that solution refers to, "umsgi.ini", doesn't exist.
Is there a condition or command for manually refreshing a dashboard? Whenever I click the dashboard's refresh button it refreshes, but I also want to set a particular token value each time I refresh the dashboard. Is that possible?
Related: is there a condition for detecting a dashboard refresh, something like if(dashboard_refresh, 0, 1)?
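A hedged Simple XML sketch for both questions above: the <init> block runs on every full page load, which includes reloading the page in the browser (whether the dashboard's own refresh control re-runs it should be tested; a panel's auto-refresh interval does not), so it can stamp a token each time (the token name is illustrative):

    <init>
      <eval token="refreshed_at">now()</eval>
    </init>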
Hi experts, I am going through the installation and setup of the Splunk App for Data Science and Deep Learning. I have come across the minimum requirements for the transformer GPU container at: https://docs.splunk.com/Documentation/DSDL/5.1.2/User/TextClassAssistant. What are the minimum requirements in general for a CPU-only Docker host machine when using this toolkit? Thanks, MCW