All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I work for a company with at most 12 workstations to monitor, and we only want to collect critical logs from these stations. Is Splunk Free a good option?
Hi All, we want to upgrade our clustered Splunk Enterprise environment from version 8.2.2 to 8.2.6, and I have this question running in my mind: what precedence rules should be followed when upgrading a clustered Splunk environment? Can anyone guide me on the order in which the Splunk instances should be upgraded? Proposed sequence: 1) Cluster Master 2) Indexer peers 3) Search head captain 4) Search head peers 5) Deployer 6) Deployment Server 7) Heavy Forwarders and UFs. Back up SPLUNK_HOME/etc first.
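As a reference point, Splunk's upgrade documentation has the cluster manager go first and recommends backing up configuration on every instance beforehand. A command sketch (default install path assumed; commands are illustrative, not the full procedure):

```
# Back up configuration on each instance before upgrading
tar -czf /tmp/splunk_etc_backup.tar.gz $SPLUNK_HOME/etc

# From the Cluster Master, put the indexer cluster into maintenance mode
# before upgrading the peers
$SPLUNK_HOME/bin/splunk enable maintenance-mode
```

Check the upgrade topic for your exact version pair, since the documented order can differ between releases.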
We use the Splunk search endpoint to get notable events: we POST to services/search/jobs with search=search `notable`, earliest_time=(currentTime - 2min), latest_time=(currentTime), and adhoc_search_level=smart. When the search is completed (services/search/jobs/&lt;sid&gt; reports dispatchState = DONE), we fetch the results from services/search/jobs/&lt;sid&gt;/results, but we don't get all of them. When we run the same search with the same time range around 10 to 15 minutes later, we get the results the earlier search missed. Why does this happen, and how do we resolve it?
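A common cause of this symptom is indexing lag: events whose timestamps fall inside the two-minute window may not be searchable yet when the job runs, and only show up once indexing catches up, which matches the later search finding them. One hedged workaround is to constrain by index time with some overlap instead of (or in addition to) event time, for example:

```
search `notable` _index_earliest=-5m _index_latest=now
```

To avoid both gaps and duplicates across polls, the caller would also need to checkpoint the last _indextime it has seen; the 5-minute window here is an assumption to tune against your actual ingestion latency.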
Hello Splunkers, I am currently using an F5 load balancer in front of two HFs that are used as intermediate forwarders and also do the parsing for incoming data. I would like to create (at index time) a new field on all logs passing through my HFs that indicates which HF did the job. In other words, I want to keep a trace of which HF was chosen for each log. I suppose I need to use a props.conf file, but I do not know where to place it, and I do not know how to dynamically set a field to the hostname of my machine. I am using a DS to deploy apps to my HFs, by the way, and I would like to avoid any custom/manual config on each HF. Thanks a lot, GaetanVP
I have 2 indexes, abc and bcz. Index abc data is in raw format like below: &lt;random ip address&gt;|-NA\CAPITA|5xxhxh545|jljdjhsdhj78987|hkjhkdjfkjfd5672v2hg7|87675678vf6x_ &lt;random date time&gt; "GET http:\\at-abc.com http/1.1" 500 &lt;random values&gt;. I want to pull 87675678vf6x_ as field1, at-abc.com as field2, and 500 as field3. Index bcz has formatted data. I now want to compare the two indexes, matching field1 of index abc against field7 in bcz where bcz field5="name", and return field1, field2, and field3. It looks simple, but it's not working.
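One subsearch-free way to correlate two indexes on a shared key is to bring both into one pipeline and keep only keys seen in both. A sketch, assuming the pipe-delimited layout shown above (the rex pattern is a guess against the sample and will need tuning; append is itself a subsearch with row limits, so verify against your volumes):

```
index=abc
| rex "\|(?<field1>[^|\s]+)\s.*\"GET\s+\S*?(?<field2>[\w.-]+\.com)\s+http/1\.1\"\s+(?<field3>\d{3})"
| eval src="abc"
| append
    [ search index=bcz field5="name"
      | eval field1=field7, src="bcz" ]
| stats values(field2) as field2 values(field3) as field3 dc(src) as n by field1
| where n=2
| fields field1 field2 field3
```

The `dc(src)=2` test keeps only field1 values present in both data sets, which is the comparison described above.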
So after searching here, it seems like a lot of people have trouble parsing/handling WinEventLogs. I want to ask: is there no better way than custom transforms and props? That might be a debatable question to some, so I'll be more targeted. I'm trying to extract parts of the Message field; here's a sanitized example:       <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{xxxxx-xxxx-4994-A5BA-3E3B0328C30D}'/><EventID>4624</EventID><Version>2</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2023-01-25T22:35:16.209857600Z'/><EventRecordID>840762295</EventRecordID><Correlation ActivityID='{D610E4E9-2C97-0000-12E5-10D6972CD901}'/><Execution ProcessID='704' ThreadID='2404'/><Channel>Security</Channel><Computer>dc01.domain.net</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>DOMAIN\okta_service</Data><Data Name='SubjectUserName'>okta_service</Data><Data Name='SubjectDomainName'>DOMAIN</Data><Data Name='SubjectLogonId'>0x31198f</Data><Data Name='TargetUserSid'>DOMAIN\Bob.Saget</Data><Data Name='TargetUserName'>bob.saget</Data><Data Name='TargetDomainName'>DOMAIN</Data><Data Name='TargetLogonId'>0x1578a0a1</Data><Data Name='LogonType'>3</Data><Data Name='LogonProcessName'>Advapi </Data><Data Name='AuthenticationPackageName'>Negotiate</Data><Data Name='WorkstationName'>DC01</Data><Data Name='LogonGuid'>{xxxxx-xx-D725-309C-788D104F655D}</Data><Data Name='TransmittedServices'>-</Data><Data Name='LmPackageName'>-</Data><Data Name='KeyLength'>0</Data><Data Name='ProcessId'>0x1658</Data><Data Name='ProcessName'>C:\Program Files (x86)\Okta\Okta AD Agent\OktaAgentService.exe</Data><Data Name='IpAddress'>1.2.3.4</Data><Data Name='IpPort'>-</Data><Data Name='ImpersonationLevel'>%%1833</Data><Data Name='RestrictedAdminMode'>-</Data><Data Name='TargetOutboundUserName'>-</Data><Data Name='TargetOutboundDomainName'>-</Data><Data Name='VirtualAccount'>%%1843</Data><Data Name='TargetLinkedLogonId'>0x0</Data><Data Name='ElevatedToken'>%%1842</Data></EventData></Event>             Namely, at the bottom:   <Data Name='IpAddress'>1.2.3.4</Data> Now, I'm still using renderXml = true because when you look at the raw type, that Message field is just so huge it's practically impossible for me to define a field against it. Unless I'm wrong? Also, per my inputs file, the sourcetype for this is 'generic_single_line'. I've tried Regex and Delimiters, and both give me errors: either about selecting too many fields or, in the case of Delimiters (when I attempted to specify 'other' and '<>'), an entirely unholy wall of text with this tiny blurb at the end: "has exceeded the configured depth_limit, consider raising the value in limits.conf". Or am I going about this all wrong, and raw is the easiest to deal with? Any help would be greatly appreciated!
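For XML-rendered events like this, one lightweight search-time option is a rex over the Data elements rather than delimiter-based extraction. A sketch against the sample above (the sourcetype is taken from the post; the capture names are choices, not a standard):

```
sourcetype=generic_single_line "<EventID>4624</EventID>"
| rex "<Data Name='IpAddress'>(?<IpAddress>[^<]*)</Data>"
| rex "<Data Name='TargetUserName'>(?<TargetUserName>[^<]*)</Data>"
| rex max_match=0 "<Data Name='(?<data_name>[^']+)'>(?<data_value>[^<]*)</Data>"
| table _time IpAddress TargetUserName
```

The max_match=0 variant pulls every Name/value pair into two multivalue fields if you want everything at once; alternatively, setting KV_MODE = xml in props.conf for this sourcetype may auto-extract the XML without any rex, which is worth testing before writing custom patterns.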
Currently running Splunk Universal Forwarder version 9.0.3. I'm looking to ignore Windows event logs (EventCode = 4103) using a "blacklist" approach as part of my overall inputs.conf configuration. While splunkd.log is not throwing any errors with my current attempts, it is also not ignoring logs containing the string to ignore: C:\WINDOWS\CCM\SystemTemp. Note: I am choosing to filter on this string because other aspects can vary, and it is the common string included in the events I want to ignore. Below is an example of such a log. Please advise. My attempt at this is:   blacklist1 = EventCode="4103" Message="(?:Host Application =)\s+(?:.*WINDOWS\\CCM\\SystemTemp\\+.*)"       User=SYSTEM Sid=S-1-5-18 SidType=1 SourceName=Microsoft-Windows-PowerShell Type=Information RecordNumber=10132121 Keywords=None TaskCategory=Executing Pipeline OpCode=To be used when operation is just executing a method Message=CommandInvocation(Out-Default): "Out-Default" Context: Severity = Informational Host Name = ConsoleHost Host Version = 5.1.19041.2364 Host ID = 5009593d-812d-49fc-a794-4633cf58cd5c Host Application = C:\WINDOWS\system32\WindowsPowerShell\v1.0\PowerShell.exe -NoLogo -Noninteractive -NoProfile -ExecutionPolicy Bypass & 'C:\WINDOWS\CCM\SystemTemp\7f1a326f-19f5-4480-9414-46ffe015e730.ps1' Engine Version = 5.1.19041.2364
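One thing worth checking in the attempt above: the regex requires whitespace immediately after "Host Application =" via \s+, but in the sample the path follows a single space after the equals sign, and SystemTemp\\+ means "one or more literal backslashes", which is probably not what was intended. A simplified sketch (the stanza name is an assumption; keep your own input stanza, and note that in the sample the CCM\SystemTemp path appears after the PowerShell.exe invocation on the same rendered line):

```
[WinEventLog://Microsoft-Windows-PowerShell/Operational]
blacklist1 = EventCode="4103" Message="Host Application\s*=\s*.*\\CCM\\SystemTemp\\"
```

If the rendered Message spans multiple lines on your events, prefixing the pattern with (?s) lets .* cross line breaks; testing the pattern against a copied raw event in a regex tester before deploying is the quickest way to confirm.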
I'm having an issue where DB Connect is reading the whole database every hour and logging duplicate events instead of reading only new events. So yes, I have up to 10-20 copies of the same event landing in Splunk. Would adjusting the execution frequency solve this issue?
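Changing the execution frequency would only change how often the full table is re-read, not the duplication itself; that behavior is typical of a batch-mode input. DB Connect's rising-column input mode instead checkpoints a monotonically increasing column and substitutes the last-seen value for the ? placeholder on each run. A sketch of the query shape (table and column names are illustrative):

```sql
-- Rising-column style query: DB Connect replaces ? with the stored checkpoint
-- so each run returns only rows newer than the last one ingested
SELECT *
FROM app_events
WHERE event_id > ?
ORDER BY event_id ASC
```

Any strictly increasing column (an auto-increment ID, an insert timestamp) can serve as the rising column; switching the input from batch to rising mode is the usual fix for this duplication pattern.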
Query:

index=apl-app-grap sourcetype=grap:apps source=*applications* host=xxxxxx
| rex field=_raw "\|rank\:(?<Report>.*?)\|"
| eval Pass=if(Report="0", "Pass", null())
| stats count(Pass) as Passed_Count

Output: Passed_Count 700. But I need the output day-wise: if I select 7 days, I should get 7 rows (one count per day), like shown below: Passed_Count 100 100 100 100 100 100 100
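One way to get a per-day breakdown is to swap the final stats for timechart with a one-day span, keeping the rest of the query above unchanged:

```
index=apl-app-grap sourcetype=grap:apps source=*applications* host=xxxxxx
| rex field=_raw "\|rank\:(?<Report>.*?)\|"
| eval Pass=if(Report="0", "Pass", null())
| timechart span=1d count(Pass) as Passed_Count
```

timechart buckets events by _time, so a 7-day time range yields one row per day; `bin _time span=1d` followed by `stats count(Pass) by _time` is an equivalent alternative.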
Hi all, when I run my original query I get one result, and when I execute the same query using tstats I get a different output: the averages do not match. How do I modify the query so that the results match?

My original query:

index=apl-cly-sap sourcetype=cly:app:sap
| search processName="applicationstatus"
| stats avg(plantime)

Output: 1233.43223454

The tstats query:

| tstats count where index=apl-cly-sap sourcetype=cly:app:sap TERM(processName=applicationstatus) by PREFIX(plantime=)
| rename plantime= as Time
| stats avg(Time)

Output: 1345.7658755
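Two differences between these searches can each shift the average. First, the filters are not equivalent: the original matches the parsed field processName, while TERM(processName=applicationstatus) matches an indexed token, which only exists if that exact string appears in the raw text without minor breakers. Second, `count by PREFIX(plantime=)` followed by `stats avg(Time)` averages the distinct token values unweighted, whereas the original averages one value per event. A hedged first diagnostic is to compare the event counts of the two filters:

```
index=apl-cly-sap sourcetype=cly:app:sap processName="applicationstatus"
| stats count

| tstats count where index=apl-cly-sap sourcetype=cly:app:sap TERM(processName=applicationstatus)
```

If the counts differ, the two filters select different event sets and no amount of tuning the aggregation will make the averages match; if they agree, a weighted mean (sum of value times count divided by total count) over the PREFIX output should reproduce the original result.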
In my team we have completed a Jenkins + Splunk installation. So far we can see all the logs that come from Jenkins job logs. Is it possible to send Jenkins custom loggers into Splunk (Jenkins custom logger example: https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/client-and-managed-masters/how-do-i-create-a-logger-in-jenkins-for-troubleshooting-and-diagnostic-information)? The custom loggers are already on disk at the `jenkins_home/log` path. Thank you so much for your help!
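Since the custom logger output already lands on disk, a plain file monitor on the forwarder running on the Jenkins host should pick it up. A sketch (the absolute path, index, and sourcetype names are assumptions to adapt):

```
# inputs.conf on the forwarder on the Jenkins host
[monitor:///var/jenkins_home/log]
index = jenkins
sourcetype = jenkins:custom_logger
disabled = 0
```

If ingestion goes through the Splunk plugin for Jenkins instead, that plugin's own file-monitoring configuration may cover the same directory; either route works as long as exactly one of them reads the files.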
I have a list of installed Chrome extensions that is returned in a multivalue field. One of the results looks like this:  All I really care about is the extension name, so I was able to run this query using rex to extract the names of all the extensions:

index=jamf source=jss_inventory "extensionAttribute.name"="Installed Chrome Extensions & Versions"
| fields extensionAttribute.name, computer_meta.assignedUser
| rex field=extensionAttribute.value max_match=0 "Name: (?<extensions>.*)\n"
| table extensions

This returns:  How can I further extract the extensions in this multivalue field? I can't get mvexpand to work because it says the new extensions field I created doesn't exist in the data, and I can't figure out how to extract each line as a separate result so that I can dedup and get a full list of all installed extensions.
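mvexpand errors out when some events lack the field, so filtering those events first usually unblocks it. A sketch built on the query above (same index, source, and rex; the where clause is the addition):

```
index=jamf source=jss_inventory "extensionAttribute.name"="Installed Chrome Extensions & Versions"
| rex field=extensionAttribute.value max_match=0 "Name: (?<extensions>[^\n]+)"
| where isnotnull(extensions)
| mvexpand extensions
| dedup extensions
| table extensions
```

If only the deduplicated list matters, `| stats values(extensions) as extensions` after the rex gets there without mvexpand at all, and avoids mvexpand's memory limits on large result sets.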
Hello everyone, I am new to the SNOW TA and am trying to find out how to schedule the polling time. I have the collection interval set to poll once a day, but I would like to schedule it to run at night.
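If the add-on's input interval accepts a cron expression (many Splunk add-on modular inputs do; check the TA's documentation for the interval setting), a nightly run could be expressed like this (the stanza name is illustrative):

```
# inputs.conf for the ServiceNow add-on (stanza name illustrative)
[snow://incident]
interval = 0 2 * * *
```

That cron expression runs the collection at 02:00 every day; if the TA only accepts a plain number of seconds, the fallback is to enable the input at the desired time of night so the 86400-second interval stays anchored there.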
Hi guys, do we have an option to store data forever in either of the buckets (warm or cold) for a particular index? If yes, can someone share the stanza?
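Retention is controlled per index in indexes.conf: data is frozen (deleted, unless a coldToFrozenDir/script is set) when either the age cap or the size cap is exceeded, so "forever" in practice means raising both well beyond what the index will ever hit. A sketch (index name and size value are illustrative; disk capacity still has to accommodate the growth):

```
# indexes.conf (index name illustrative)
[my_index]
# age cap, in seconds (~136 years)
frozenTimePeriodInSecs = 4294967295
# size cap, in MB -- data freezes when EITHER cap is exceeded
maxTotalDataSizeMB = 100000000
```

Note there is no warm-vs-cold retention switch; both caps apply to the index as a whole, and warm/cold only determine which storage path the buckets sit on.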
Hi, I have a bar graph which is not showing the full y-axis value when the value has more words. Is there a way to force the dashboard to display the full value? Example: instead of "Monthly Patch Update" it displays "Month...pdate".
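In Simple XML, that truncation is the charting library's label ellipsis, controlled by the overflowMode label-style property. A sketch (note the assumption that for bar charts the category labels are configured through the X-axis properties even though they render along the vertical edge; verify against the chart configuration reference for your Splunk version):

```xml
<chart>
  <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
</chart>
```

With ellipsisNone the full label is drawn; long labels may then need a wider panel or a smaller font to avoid overlapping.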
Hi, I have the search below, where I have combined 3 searches into 1. Each individual search works fine, but when combined it is not able to pull data from the previous year, and the table shows empty values for a few months.

index=dev AND "alpha"
| dedup _time
| eval Month=strftime(_time,"%m %b %Y")
| stats count by Month
| rename count as alpha
| appendcols
    [ search index=DEV AND "[beta]"
      | dedup _time
      | eval Month=strftime(_time,"%m %b %Y")
      | stats count by Month
      | rename count as beta ]
| appendcols
    [ search index=dev AND "gamma"
      | dedup _time
      | eval Month=strftime(_time,"%m %b %Y")
      | stats count by Month
      | rename count as gamma ]
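appendcols pastes result rows together positionally, so if one subsearch is missing a month the remaining columns shift against the wrong Month values, and subsearches are also subject to their own time and row limits. A single-search alternative sidesteps both problems; a sketch, assuming the three markers appear literally in _raw:

```
index=dev ("alpha" OR "[beta]" OR "gamma")
| eval type=case(like(_raw, "%alpha%"), "alpha",
                 like(_raw, "%[beta]%"), "beta",
                 like(_raw, "%gamma%"), "gamma")
| dedup _time type
| eval Month=strftime(_time, "%m %b %Y")
| chart count over Month by type
```

One search means one time range and one set of Month rows, with a column per type; the dedup is kept per type to mirror the original's `dedup _time` behavior.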
This query doesn't bring up anything when I try to pull RDP connections in my environment:

event_simpleName=UserLogon LogonType_decimal=10
| stats values(UserName) dc(UserName) AS "User Count" count(UserName) AS "Logon Count" by aid, ComputerName
| sort - "Logon Count"
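When a query like this returns nothing, it helps to relax the predicates one at a time and inspect the actual field values; a diagnostic sketch using the field names from the query above:

```
event_simpleName=UserLogon
| stats count by LogonType_decimal
```

If this returns events but LogonType_decimal never shows the value 10, RDP logons either are not present in this data or the field is populated differently than expected; if even the bare event_simpleName filter returns nothing, the problem is the index/time range rather than the logon-type filter.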
Hello, we have a Splunk license and are running into the below error, which restricts searches to internal indexes only: (reason: [DISABLED_DUE_TO_VIOLATION,0]). How do I contact the Splunk team to get my license reset temporarily? I do not have the email or login portal credentials of the account holder who initially created the Splunk cluster, so I cannot reach support through that account.
Getting the following error when trying to configure this on the on-prem instance of Splunk:   Does anyone have an answer as to why this is happening?   
Hello everyone, I have a question for you and I need your help, please. I have some logs, but the parsing isn't done. In a single log I have a lot of indicators, and I need to extract these fields:
- cpu_model
- device_type
- distinguished_name
- entity
- last_boot_duration
- last_ip_address
- last_logon_duration
- last_logon_time
- last_system_boot
- mac_addresses: [ 00:42:38:CA:81:72 00:42:38:CA:81:73 00:42:38:CA:81:76 02:42:38:CA:81:72 74:78:27:91:41:BB B0:9F:80:55:40:44 ]
- name: PCW-TOU-76566
- number_of_days_since_last_boot
- number_of_days_since_last_logon
- number_of_monitors: 3
- os_version_and_architecture: Windows 10 Pro 21H2 (64 bits)
- platform: windows
- score: Device performance/Boot speed: null
- system_drive_capacity: 506333229056
- system_drive_usage: 0.19
- total_nonsystem_drive_capacity: 0
- total_nonsystem_drive_usage: null
- total_ram: 8589934592

The log looks like this: What can I do to have the fields extracted so I can develop my indicators? The regex method is not possible in this case; can I use the rex command, and how would I do it for this example? I need your help, thank you so much.
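Worth noting that rex is regex applied at search time, so if regex truly cannot work here, rex will not either; that said, for key: value pairs like those listed above, per-field rex calls are usually workable. A sketch (index/sourcetype and the exact patterns are assumptions to adapt to the real raw text):

```
index=your_index sourcetype=your_sourcetype
| rex "last_ip_address:\s*(?<last_ip_address>\S+)"
| rex "number_of_monitors:\s*(?<number_of_monitors>\d+)"
| rex "os_version_and_architecture:\s*(?<os_version_and_architecture>[^\r\n]+)"
| rex max_match=0 "(?<mac_addresses>(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2})"
| table last_ip_address number_of_monitors os_version_and_architecture mac_addresses
```

If the layout is stable, the same patterns can instead go into props.conf EXTRACT- statements for the sourcetype, so the fields exist automatically without repeating the rex in every search.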