All Topics


Dear Community, I have integrated FireEye NX with Splunk, but the logs are not parsing as expected. I searched for relevant add-ons and apps for FireEye and found the following:
- https://splunkbase.splunk.com/app/1904 (FireEye add-on)
- https://splunkbase.splunk.com/app/1845 (FireEye app)
While going through the documentation for this add-on and app, I found that they only support the Splunk Enterprise platform, not Splunk Cloud. Is there another app or add-on with the same functionality for Splunk Cloud?
Hello there, I'm creating a #Splunk Dashboards table used to monitor user commands, and I want to make it flexible and dynamic so the table can be filtered by user input. So far I have this search, which drives a table that can be filtered by a Find Command and an Exclude Command, but each filter only accepts a single string:

index=os_linux sourcetype="bash_history"
| dedup timestamp
| fields _time process, dest, user_name
| search user_name=$user_name$ dest=$host_name$ process="$user_command$" NOT process="$exclude_command$"
| table _time user_name process dest
| rename dest as hostname, process as user_command
| sort -_time

Is it possible to make exclude_command accept multiple values with some separator? Or is another option recommended?
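A hedged sketch of one common approach, not tested here: if the Exclude Command input is changed to a multiselect whose token uses the value prefix process=", the value suffix ", and the delimiter " OR ", then the token expands to something like process="rm" OR process="chmod", and the whole group can be excluded at once:

index=os_linux sourcetype="bash_history"
| dedup timestamp
| fields _time process, dest, user_name
| search user_name=$user_name$ dest=$host_name$ process="$user_command$" NOT ($exclude_command$)
| table _time user_name process dest
| rename dest as hostname, process as user_command
| sort -_time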
With a specific query, I can get the value below for one field:

{
    "key1": {
        "field1": x
    },
    "key2": {
        "field2": xx
    },
    "key3": {
        "field3": xxx
    }
}

Each time the query runs, the strings key1, key2, key3 are different, the strings field1, field2, field3 are different, and even the number of keys varies; there may be a key4, key5, and so on.

Now I want to get the table below. Could someone help with this? Thanks.

Name A    Name B
key1      field1
key2      field2
key3      field3
...       ...
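Not a definitive answer, but a minimal sketch, assuming the JSON string sits in a field called json_field (a made-up name for illustration). It extracts every top-level key and its inner field name with rex, then expands them into one row per pair:

... your base search ...
| rex field=json_field max_match=0 "\"(?<NameA>[^\"]+)\":\s*\{\s*\"(?<NameB>[^\"]+)\""
| eval pairs=mvzip(NameA, NameB, "|")
| mvexpand pairs
| eval NameA=mvindex(split(pairs, "|"), 0), NameB=mvindex(split(pairs, "|"), 1)
| table NameA NameB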
Hello, I would like to obtain a list of all domains that did NOT match my lookup file, which is composed of wildcard domains. Here is an example:

Lookup file (domain column):
*adobe.com*
*perdu.com*

Events (index=proxy | table dest):
dest
acrobat.adobe.com
geo2.adobe.com

Result wanted:
*perdu.com*

My search looks like this:

index=proxy
| dedup dest
| table dest
| eval Observed=1
| append [| inputlookup domain.csv | rename domain as dest | eval Observed=0]
| stats max(Observed) as Observed by dest
| where Observed=0

Results obtained:
*adobe.com*
*perdu.com*

because the search didn't count the lines acrobat.adobe.com and geo2.adobe.com as duplicates of *adobe.com*. So what I need is a way to dedup the events based on the dest values matched by the lookup, and rename the deduped dest value to the wildcard domain from the lookup. That way, mid-search I would have these results:

dest             Observed
*adobe.com*      1          ==> from the events
*adobe.com*      0          ==> from the lookup
*perdu.com*      0          ==> from the lookup

then stats max() and where to keep only the wildcard domains that never matched. How could I achieve that?
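One possible route, sketched under assumptions: add a second column to domain.csv (say pattern, a made-up name) that simply repeats the wildcard string, and create a lookup definition (called domain_wildcard here, also made up) with wildcard matching in transforms.conf. The lookup can then rewrite each matched dest back to its wildcard form before the append:

[domain_wildcard]
filename = domain.csv
match_type = WILDCARD(domain)

index=proxy
| dedup dest
| lookup domain_wildcard domain AS dest OUTPUT pattern AS matched_pattern
| eval dest=coalesce(matched_pattern, dest), Observed=1
| append [| inputlookup domain.csv | rename domain as dest | eval Observed=0]
| stats max(Observed) as Observed by dest
| where Observed=0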
message: Send async response via rest [url=https://prd.ase1.dbktp-feedloader.prd.gcp.db.com/callbackservice/book, asyncResp={"transactionItems":[{"itemId":"KTPACC1_20240717000001633206_01","status":"FAILED","accountIdentification":{"gtbCashAccount":{"branchCode":"788","accountNumber":"0191395050","currencyCode":"USD"}}},{"itemId":"KTPACC1_20240717000001633206_02","status":"FAILED","accountIdentification":{"gtbCashAccount":{"branchCode":"788","accountNumber":"0000195054","currencyCode":"USD"}}}],"orderStatusResponse":{"orderStatus":"ORDER_FAILURE","orderId":"KTPACC1_20240717000001633206"},"error":{"errorCode":"SEP013","errorDescription":"Cannot find IDMS-0788 account by accNumber: 0000195054"}}]

I am using the query below, but it is not giving me any output. Any idea?

index = app_events_sdda_core_de_prod source="/home/sdda/apps/logs/sep-app/app-json.log" level=TRACE
| fields message
| rex field=message \"error\":\{\"errorCode\":\"(?<errorCode>[^\"]+)\"
| dedup errorCode
| table errorCode

However, the same regex validates correctly on https://regex101.com/r/XkBntG/1
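A hedged guess at two likely culprits: in SPL the rex pattern needs to be wrapped in double quotes (regex101 does not need that), and the text may live in _raw rather than in an already-extracted message field. A minimal sketch:

index=app_events_sdda_core_de_prod source="/home/sdda/apps/logs/sep-app/app-json.log" level=TRACE
| rex field=_raw "\"errorCode\":\"(?<errorCode>[^\"]+)\""
| dedup errorCode
| table errorCode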
Hi, I'm trying to collate the URL domain names of websites users visit over the course of 24 hours. It pulls the right data, but it won't display as a table and I'm not sure how to fix it. I'm using URL Toolbox to parse out the domain.

index="##" eventtype=pan $user$ hoursago=24
| eval list="mozilla"
| `ut_parse_extended(url,list)`
| stats count by ut_domain_without_tld
| table ut_domain_without_tld count

I'm fairly new to Splunk, so any help is appreciated.
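For what it's worth, a minimal variant that tends to work, assuming the pan eventtype extracts a url field; earliest=-24h is used here in place of hoursago=24, and the trailing table command is dropped because stats already produces a two-column table:

index="##" eventtype=pan $user$ earliest=-24h
| eval list="mozilla"
| `ut_parse_extended(url,list)`
| stats count by ut_domain_without_tld
| sort -count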
Hi All, I have a dropdown in Dashboard Studio (screenshot not included here) whose options are All, Active groups and Inactive groups, along with tables for each view. If the user clicks "Inactive groups", a table with the details of the inactive groups should be displayed; if the user clicks "Active groups", a table with the active groups should be displayed; and if the user clicks "All", all groups should be displayed. Until one of these is chosen, the tables should stay hidden. I have selected the "When data is unavailable, hide element" option under Visibility in the configuration, but I am not sure how to achieve the use cases above. Can anyone help me with this? Thanks, PNV
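Not an answer on the visibility setting itself, but a workaround sketch: drive a single table from the dropdown token instead of hiding and showing three tables. The token name group_filter and the field names group_name/group_status below are assumptions for illustration; the idea is that choosing All passes everything through, while Active/Inactive filter the rows:

<your base search over the groups data>
| search group_status="$group_filter$" OR "$group_filter$"="All"
| table group_name group_status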
Hello community, I'm encountering an issue while working with custom content in Splunk Security Essentials. I have created custom content with this search:

index=windows sourcetype=WinEventLog
| stats count(eval(action="success")) as successes count(eval(action="failure")) as failures by src
| where successes>0 AND failures>100

However, when I navigate to the content under Content -> Security Content and attempt to save it as a scheduled search, the option "Save Scheduled Search" is not available. I noticed that for pre-existing content, such as "Basic Brute Force", this option is present. Could you please advise on why this option might not be appearing for my custom content? Are there any additional steps or configurations required to enable this feature for custom content? Thank you for your assistance! Best regards
I am not getting the full data in the output when combining two queries using join. When I run the first query individually, I get 143 results after dedup, but after joining I get only 71 results, even though I know the data for the remaining records is available when the second query runs on its own. How can I fix this? I am searching for records where pods got claimed, then looking up the connected time with a subsearch, and I need all columns in the output table.

index=aws-cpe-scl source=*winserver* "methodPath=POST:/scl/v1/equipment/router/*/claim/pods" responseJson "techMobile=true"
| rex "responseJson=(?<json>.*)"
| eval routerMac = routerMac
| eval techMobile = techMobile
| eval status = status
| spath input=json path=claimed{}.boxSerialNumber output=podSerialNumber
| spath input=json path=claimed{}.locationId output=locationId
| eval node_id = substr(podSerialNumber, 0, 10)
| eval winClaimTime=strftime(_time,"%m/%d/%Y %H:%M:%S")
| table winClaimTime, accountNumber, routerMac, node_id, locationId, status, techMobile
| dedup routerMac, node_id sortby winClaimTime
| join type=inner node_id
    [ search index=aws-cpe-osc ConnectionAgent "Node * connected:" model=PP203X
    | rex field=_raw "Node\s(?<node_id>\w+)\sconnected"
    | eval nodeFirstConnectedTime=strftime(_time,"%m/%d/%Y %H:%M:%S")
    | table nodeFirstConnectedTime, node_id
    | dedup node_id sortby nodeFirstConnectedTime]
| table winClaimTime, accountNumber, routerMac, node_id, locationId, status, techMobile, nodeFirstConnectedTime
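A hedged observation: join runs its subsearch under subsearch limits (roughly 50,000 rows and a runtime cap by default) and an inner join drops any row without a match, either of which could explain going from 143 to 71. One common alternative, sketched with the fields from your search and not tested against your data, is to merge both datasets and let stats stitch them together by node_id:

index=aws-cpe-scl source=*winserver* "methodPath=POST:/scl/v1/equipment/router/*/claim/pods" responseJson "techMobile=true"
| rex "responseJson=(?<json>.*)"
| spath input=json path=claimed{}.boxSerialNumber output=podSerialNumber
| spath input=json path=claimed{}.locationId output=locationId
| eval node_id=substr(podSerialNumber, 0, 10), winClaimTime=strftime(_time, "%m/%d/%Y %H:%M:%S")
| append
    [ search index=aws-cpe-osc ConnectionAgent "Node * connected:" model=PP203X
    | rex field=_raw "Node\s(?<node_id>\w+)\sconnected"
    | eval nodeFirstConnectedTime=strftime(_time, "%m/%d/%Y %H:%M:%S") ]
| stats earliest(winClaimTime) as winClaimTime earliest(nodeFirstConnectedTime) as nodeFirstConnectedTime
        values(accountNumber) as accountNumber values(routerMac) as routerMac values(locationId) as locationId
        values(status) as status values(techMobile) as techMobile by node_id
| where isnotnull(winClaimTime)

Note that append is still subject to its own subsearch limits, so for very large volumes a single base search of the form (index=aws-cpe-scl ...) OR (index=aws-cpe-osc ...) followed by the same stats is the more robust shape.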
Can we apply the following example on a UF?

Keep specific events and discard the rest
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest

The answer is no; that example is for any non-UF instance. For a UF you can modify the example as follows.

Edit props.conf and add the following:

[source::/var/log/messages]
TRANSFORMS-set = setnull,setparsing

Edit transforms.conf and add the following:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \[sshd\]
DEST_KEY = _TCP_ROUTING
FORMAT = <valid-tcpoutgroup(s)>

Or, edit props.conf and add the following:

[source::/var/log/messages]
TRANSFORMS-set = setnull,setparsing

Edit transforms.conf and add the following:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \[sshd\]
DEST_KEY = queue
FORMAT = parsingQueue
I want Splunk to ingest my AV log. I made the following entry in the inputs.conf file (note: the log file is a plain text file with no formatting):

[monitor://C:ProgramData\'Endpoint Security'\logs\OnDemandScan_Activity.log]
disable=0
index=winlogs
sourcetype=WinEventLog:AntiVirus
start_from=0
current_only=0
checkpointInterval = 5
renderXml=false

My question is: is the stanza written correctly? When I search, I am not seeing anything.
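Not a definitive fix, but a hedged sketch of how this stanza is usually written for a flat text file: the path needs a backslash after C: and no quotes around the folder name, the documented key is disabled rather than disable, and start_from, current_only, checkpointInterval and renderXml belong to WinEventLog inputs rather than file monitors, so they can probably be dropped. The sourcetype below is a made-up example; use whatever your parsing rules expect:

[monitor://C:\ProgramData\Endpoint Security\logs\OnDemandScan_Activity.log]
# 'disabled' is the documented setting name
disabled = 0
index = winlogs
# illustrative sourcetype; a plain-text log is usually better served by its own sourcetype than WinEventLog:AntiVirus
sourcetype = antivirus:ondemandscan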
Where can I download Splunk Enterprise 9.2.2? I have version 9.2.1 and it has a vulnerability. Here is the description: "The version of Splunk installed on the remote host is prior to tested version. It is, therefore, affected by a vulnerability as referenced in the SVD-2024-0703 advisory."
Hi All, hope this message finds you well. I have installed Splunk on-prem on a Linux box as the splunk user and have given it the proper permissions. The Azure VM gets shut down automatically at around 11 pm every day and there is no auto-start; for the time being we are starting the VM manually. My problem is that while installing the Splunk instance I ran the enable boot-start command and it was successful, yet the splunkd service does not start on its own. Can anyone please suggest what can be done to fix it? Thanks in advance.
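A few things that are often checked in this situation, sketched as shell commands; the /opt/splunk path and the Splunkd unit name are the defaults that enable boot-start creates under systemd, so adjust them if your install differs:

# re-run boot-start as root, systemd-managed, running as the splunk user
sudo /opt/splunk/bin/splunk disable boot-start
sudo /opt/splunk/bin/splunk enable boot-start -user splunk -systemd-managed 1

# confirm the unit exists, is enabled, and comes up after a reboot
sudo systemctl is-enabled Splunkd
sudo systemctl status Splunkd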
I have a search that yields:

"message":"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257"

I am trying to extract the value associated with fuel; the value can be any number from 0 to 1000. Using the field extractor I got an unusable rex result:

rex message="^\{"\w+":\d+,"\w+_\w+":"[a-f0-9]+","\w+":"\w+_\w+","\w+_\w+":"\w+","\w+_\w+":"\w+","\w+":\{"\w+":"\w+","\w+":"\w+","\w+":\d+\.\d+,"\w+":\-\d+\.\d+,"\w+":"\w+"\},"\w+_\w+":"\w+","\w+":"\w+::\w+_\w+_\w+:\s+\w+:\s+\d+,\s+\w+:\s+\d+,\s+\w+_\w+:\s+\d+,\s+\w+:\s+\w+,\s+\w+:\s+(?P<fuel_level>\d+)"

When I try to search with this, the next command does not work and the result is: Invalid search command 'a'. Can someone give me a usable rex to get that number into a field called fuel_level?
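A much simpler pattern should do here, assuming the text already lives in an extracted message field as shown; if it does not, point rex at _raw instead:

... your base search ...
| rex field=message "fuel:\s*(?<fuel_level>\d+)"
| table _time fuel_level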
Hope someone can assist. My client needs to be able to read Word and other binary files from a dashboard without importing them into Splunk. He has a file share where the documents are stored and would like to list the share's contents in a dashboard, click a document, and view it in its native application. Is there a way to do this with Splunk?
I am exceeding my 5 GB license. I have determined the cause by running a 24-hour search with the following:

index="winlogs" host=filesvr source="WinEventLog:Security" EventCode=4663 Accesses="ReadData (or ListDirectory)" Security_ID="NT AUTHORITY\SYSTEM"

The search above returns more than 4.5 million records. My question is: how do I stop Splunk from ingesting EventCode 4663 events where Security_ID="NT AUTHORITY\SYSTEM"? I would appreciate any assistance or suggestions.
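One hedged way to keep these events from being indexed (and counted against the license) is a nullQueue filter applied where parsing happens, i.e. on the indexer or a heavy forwarder rather than a universal forwarder. The stanza name below is made up and the regex is a sketch to verify against your raw events:

# props.conf
[source::WinEventLog:Security]
TRANSFORMS-drop_system_4663 = drop_system_4663

# transforms.conf
[drop_system_4663]
REGEX = (?ms)EventCode=4663.*NT AUTHORITY\\SYSTEM
DEST_KEY = queue
FORMAT = nullQueue

If the data comes in through the Windows TA's WinEventLog input, the input-side blacklist settings described in the inputs.conf documentation are another option worth checking, since they drop the events before they ever leave the forwarder.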
Hi everyone! This issue is exclusive to Splunk Universal Forwarder v9.2.1. What's happening is that the script is dumping yum update checks against Satellite, thereby filling all the space on the servers. When I checked the internal logs, it seems update.sh is installing older versions of these Satellite Linux packages and then logging a message like:

message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/update.sh" Not using downloaded (satellite package name like rhel..blah..blah) because it is older than what we have:

Has anyone faced this particular issue? I am not able to understand why update.sh is trying to install the older packages in the first place. Can anyone suggest what can be done to resolve it? Thanks.
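Not a root-cause answer, but if the output of that scripted input is not needed, a hedged workaround is to disable it in a local inputs.conf for Splunk_TA_nix on the affected forwarders. The stanza name below assumes update.sh is wired up like the other Splunk_TA_nix scripted inputs; check the app's default/inputs.conf for the exact name before copying it:

# /opt/splunkforwarder/etc/apps/Splunk_TA_nix/local/inputs.conf
[script://./bin/update.sh]
disabled = 1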
I am trying to query our Windows and Linux indexes to verify how many times a user has logged in over a period of time. Currently, I only care about the last 7 days. I've tried to run some queries, but it hasn't been very fruitful. Can I get some assistance with generating a query to determine the number of logins over a period of time, please? Thank you.
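A starting-point sketch; the index names and sourcetypes here are assumptions to swap for your own. Windows interactive logons are usually EventCode 4624 in the Security event log, and Linux SSH logons show up as "Accepted" lines in the secure/auth log:

(index=windows source="WinEventLog:Security" EventCode=4624)
OR (index=linux sourcetype=linux_secure "Accepted") earliest=-7d
| eval user=coalesce(user, Account_Name)
| stats count as logins by user
| sort -logins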
Is it possible to use a lookup file in Notable Event suppression, say to look up a list of assets/environments that we do or don't want to know about?