All Topics

Do any versions of Splunk and Splunk products utilize python-werkzeug?
Hi, I have two fields whose time zones seem to be different. Could you please help me get the difference? itime = 2024-02-22 20:56:02,185 and stime = 2024-02-23T01:56:02Z. I tried the SPL below, but it always shows a delay of around 5 hours:
| eval itime=strptime(Initial_Time,"%Y-%m-%d %H:%M:%S,%3N")
| eval stime=strptime(s_time,"%Y-%m-%dT%H:%M:%S")
| eval difference = abs(itime - stime)
| eval time_difference=tostring(difference, "duration")
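A likely cause: neither strptime format consumes the trailing Z, so stime is parsed in the search head's local time zone instead of UTC, which shows up as a fixed offset (your ~5 hours). A minimal sketch of one workaround, assuming s_time always ends in a literal Z meaning UTC -- append an explicit +0000 offset and parse it with %z:
| eval itime=strptime(Initial_Time, "%Y-%m-%d %H:%M:%S,%3N")
| eval stime=strptime(s_time."+0000", "%Y-%m-%dT%H:%M:%SZ%z")
| eval difference=abs(itime - stime)
| eval time_difference=tostring(difference, "duration")
With both timestamps normalized to epoch seconds in the correct zone, the duration reflects the real gap rather than the time-zone offset.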
Hello, I have a requirement to get logs into Splunk from Snowflake, and I have no idea where to start. I came across the Splunk docs on Splunk DB Connect: https://docs.splunk.com/Documentation/DBX/3.15.0/DeployDBX/Installdatabasedrivers  Can you guide me on how to get started? How do I get logs from Snowflake into Splunk? Can I use an HEC token to get the logs?
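DB Connect with the Snowflake JDBC driver (the doc you linked covers driver installation) is the usual pull-based route. Alternatively, if you export the logs yourself with a script, you can push each record to Splunk over HEC. A minimal sketch of an HEC call, where the hostname, token, index, and sourcetype are placeholders for your environment:
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": {"query_id": "01a2b3c4", "status": "SUCCESS"}, "index": "snowflake", "sourcetype": "snowflake:log"}'
Either way, note that HEC only receives data that something pushes to it; it cannot pull from Snowflake on its own.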
Hello, what is the official limit on query results in Splunk? Is this also documented somewhere on the Splunk website? Kind regards, Christian
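There is no single hard cap, but the limit most people hit is maxresultrows in limits.conf, which defaults to 50,000 and bounds how many results the search head retains for non-transforming searches; it is documented in the limits.conf spec on docs.splunk.com. A sketch of where the setting lives (raising it costs memory and disk, so verify against your version's spec first):
# $SPLUNK_HOME/etc/system/local/limits.conf
[searchresults]
maxresultrows = 50000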
Hi Team, we need to monitor memory by process for each Windows host. We could not find any per-process memory data -- the instance value only shows "0" -- but we are monitoring CPU by process for the same hosts, and that data contains an instance field with the process names. We verified the inputs.conf stanzas and everything looks fine; we need help getting process names into the instance field for memory as well. Below is the query used to populate CPU utilization by process:
index=perfmon source="Perfmon:Process" counter="% Processor Time" instance!=_total instance!=idle
| timechart limit=20 useother=f span=10m max(Value) as cputime by instance
We need the same for memory utilization by process, with counter = "% Committed Bytes In Use". Thanks
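One detail may explain the empty instances: "% Committed Bytes In Use" belongs to the Memory performance object, which is system-wide and has no per-process instances, so it can never break down by process name. Per-process memory counters such as "Working Set" or "Private Bytes" live under the Process object you already collect for CPU. A sketch, assuming the perfmon input syntax from the Splunk Add-on for Windows (interval and index are placeholders):
[perfmon://Process]
object = Process
counters = % Processor Time; Working Set; Private Bytes
instances = *
interval = 60
index = perfmon
index=perfmon source="Perfmon:Process" counter="Working Set" instance!=_total instance!=idle
| timechart limit=20 useother=f span=10m max(Value) as membytes by instance
Note the result is bytes rather than a percentage.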
Hello Splunkers!! I want to configure Active Directory authentication in Splunk via LDAP. My Splunk server and domain controller are two different servers on the same network. Please guide me on the steps I need to follow.
1. Should I open inbound or outbound port 389 on both servers?
2. How do I create a user and a user group in Active Directory?
3. After the LDAP mapping, does it impact the existing Splunk users?
4. Please share a document if anybody has already performed a POC on this.
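On point 1: Splunk initiates the LDAP connection, so the Splunk server needs outbound access to port 389 on the domain controller (or 636 for LDAPS), with the matching inbound rule on the DC; nothing needs to reach Splunk on 389. On point 3: existing native Splunk accounts keep working alongside LDAP. A minimal authentication.conf sketch -- every host, DN, and strategy name below is a placeholder for your domain:
# $SPLUNK_HOME/etc/system/local/authentication.conf
[authentication]
authType = LDAP
authSettings = corp_ad

[corp_ad]
host = dc01.corp.example.com
port = 389
bindDN = CN=splunk-bind,OU=Service Accounts,DC=corp,DC=example,DC=com
bindDNpassword = <bind-account-password>
userBaseDN = CN=Users,DC=corp,DC=example,DC=com
groupBaseDN = OU=Groups,DC=corp,DC=example,DC=com
userNameAttribute = sAMAccountName
realNameAttribute = displayName
groupNameAttribute = cn
groupMemberAttribute = member
After a restart, map the AD groups to Splunk roles under Settings > Authentication methods.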
Hi, does Splunk have a document template for capturing dashboard requirements from users? The points below are what we ask users about what they are looking for in a dashboard. Are there any other points to add? A sample template would help.
1. Identify stakeholders’ goals to suggest dashboard metrics ...
2. Take a questions-first approach ...
3. Understand how end users interact with the dashboard ...
4. Identify which KPIs are the most important ...
5. Create a step-by-step workflow ...
6. Collaborate on the dashboard layout in advance ...
7. Choose the dashboard type before creating it ...
8. Repurpose analyzed data to create useful dashboards ...
Thanks, Karthi
Hi all, I have a lookup which had around 1000 entries. Recently someone updated the lookup and all the entries got deleted. How can I find out who updated the lookup?
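Splunk's internal access logs usually capture lookup edits made through the UI or REST. A sketch to start from -- the sourcetypes are standard, but the exact URI pattern depends on how the lookup was edited (Lookup Editor app, REST upload, etc.), so treat the filter as an assumption to adapt:
index=_internal (sourcetype=splunkd_access OR sourcetype=splunkd_ui_access) method=POST uri_path=*lookup-table-files*
| table _time user method uri_path status
If the lookup was overwritten by a search using outputlookup, look instead in index=_audit for the search that ran it.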
Hello experts... I need help... I want to fetch disabled AD account users. Can someone share a Splunk query for this?
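If Windows Security logs from the domain controllers are indexed, EventCode 4725 ("A user account was disabled") marks each disable action. A sketch, where the index name is a placeholder and the field extraction assumes the Splunk Add-on for Windows:
index=wineventlog source="WinEventLog:Security" EventCode=4725
| stats latest(_time) as disabled_time by user
| convert ctime(disabled_time)
For the current state of all accounts (rather than disable events), the SA-ldapsearch app can query AD directly for the disabled bit in userAccountControl.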
Hello experts... I need help... I want to fetch Azure orphaned disk details. Can someone share a Splunk query for this?
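This depends on how your Azure data reaches Splunk. If an add-on is already ingesting managed-disk inventory (Microsoft.Compute/disks resources), orphaned disks are the ones whose diskState is "Unattached" and whose managedBy is empty. A sketch where the index, sourcetype, and field paths are all assumptions to adapt to your ingestion:
index=azure sourcetype="azure:compute:disk"
| dedup id
| search properties.diskState="Unattached"
| table name id location properties.diskSizeGB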
I downloaded and installed these apps from Splunkbase: https://splunkbase.splunk.com/app/4232 https://splunkbase.splunk.com/app/2642 As per the instructions, I added sourcetype=linux_audit to the local "auditd_events" eventtype in the TA, and linux_audit to the list of sourcetypes in TA-linux_auditd/lookups/auditd_sourcetypes.csv, but the dashboard data is not showing up. My existing auditd events use different sourcetype and eventtype names. For example, I get the auditd events with: index="linux_fw" sourcetype="syslog" eventtype="mycustom_audit_events" So, do I need to add sourcetype="syslog" to the local "auditd_events" eventtype in the TA and add syslog to the list of sourcetypes in TA-linux_auditd/lookups/auditd_sourcetypes.csv?
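Most likely yes -- the app's dashboards key off the auditd_events eventtype and that lookup, so both need to match the sourcetype your data actually carries. A sketch of the local override, assuming the stanza name matches the app's default (check its default/eventtypes.conf):
# <TA>/local/eventtypes.conf
[auditd_events]
search = sourcetype=linux_audit OR sourcetype=syslog
Then add a syslog row to TA-linux_auditd/lookups/auditd_sourcetypes.csv, as you suggested. A broad sourcetype like syslog may pull non-auditd events into the eventtype, so a tighter search (e.g. restricting on index or source) is worth considering.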
Hello experts... I need help... I want to fetch Azure snapshot details, active snapshots only; I don't need details of snapshots which were deleted. Can someone share a Splunk query for this?
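As with the orphaned-disk question above, this assumes something is already ingesting snapshot inventory (Microsoft.Compute/snapshots). Deleted snapshots simply stop appearing in inventory pulls, so keeping only the latest record per snapshot ID over a recent window approximates "active". A sketch where index, sourcetype, and field paths are placeholders:
index=azure sourcetype="azure:compute:snapshot" earliest=-24h
| dedup id
| search properties.provisioningState="Succeeded"
| table name id location properties.timeCreated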
Advanced Bot Detected on Imperva WAF
Backdoor Detected on Imperva WAF
Bot Access Control Detected on Imperva WAF
Can anyone help me find custom search queries for the above use cases?
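A starting-point sketch, assuming the Imperva events arrive via CEF/syslog and expose a violation or attack-name field -- every name below (index, sourcetype, field, and values) is an assumption to replace with what your events actually contain:
index=imperva sourcetype="imperva:waf"
| search attack_type IN ("Advanced Bot*", "Backdoor*", "Bot Access Control*")
| stats count by attack_type src dest_host
Each use case could then be saved as its own alert with a threshold on count.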
Hi, I have a query that returns counts of different response codes per server for two days; now I need to find the difference between these two days.
Current output:
Respcodes   Srv1  Srv2  Srv3  Srv4 ...
200         80    10    100   42
400         12    55    11    0
500         11    34    2     8
...
Expected output:
Date        Respcodes   Srv1  Srv2  Srv3  Srv4 ...
2024/02/23  200         80    10    100   42
2024/02/24  200         70    19    11    11
2024/02/23  400         12    55    11    0
2024/02/24  400         44    14    46    89
2024/02/23  500         11    34    2     8
2024/02/24  500         11    34    2     9
...
If it could also calculate the delta of each server's count between the two dates, that would be great! Any ideas? Thanks
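One approach is to keep the date in the row key before pivoting by server. A sketch, assuming each event carries a Respcodes field and a server field (both names assumed; swap in your base search):
index=your_index earliest=-2d@d latest=@d
| eval Date=strftime(_time, "%Y/%m/%d")
| eval key=Date."|".Respcodes
| chart count over key by server
| rex field=key "^(?<Date>[^|]+)\|(?<Respcodes>.+)$"
| fields - key
| table Date Respcodes *
From there, a streamstats window of 2 per Respcodes (sorted by Date) can subtract the earlier day from the later one to get per-server deltas.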
I updated etc/apps/custom_app/local/data/ui/nav/default.xml with a new sub-menu choice that should display saved searches based on a name pattern:
<collection label="Menu Choice">
    <saved source="unclassified" match="Pattern" />
</collection>
where "Pattern" is a specific pattern that appears in the names of 17 saved searches -- reports and alerts. The alerts are scheduled; the reports are not. All the alerts and reports are shared in App, not Private. This newly added menu choice doesn't appear in the menu. The menu has many other working choices displaying dashboards or other saved searches. What could be the reason? I tried reloading the app and changing the ownership of one of the alerts and reports, but it didn't help. I am logged in as admin. Thank you.
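One thing worth ruling out: source="unclassified" only matches saved searches that are not already picked up by another <saved> or <collection> element elsewhere in the nav, so if other entries already claim all 17 searches, this collection stays empty and is hidden. A sketch of the less restrictive variant (same pattern, assuming the rest of your nav file is unchanged):
<collection label="Menu Choice">
    <saved source="all" match="Pattern" />
</collection>
If that populates, the original entries were being consumed by other menu items.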
Hi, I am writing a query for a Splunk dashboard; the base search contains multiple joined subsearches and the page loads very slowly. I need to improve the performance of this dashboard query. This is the query:
index="Test" applicationName="sapi" timestamp log.map.correlationId level message ("Ondemand Started*" OR "Process started*")
| rex field=message max_match=0 "\"Ondemand Started for. filename: (?<OnDemandFileName>[^\n]\w+\S+)"
| rex field=message max_match=0 "Process started for (?<FileName>[^\n]+)"
| eval OnDemandFileName=rtrim(OnDemandFileName, "Job")
| eval "FileName/JobName"=coalesce(OnDemandFileName, FileName)
| rename timestamp as Timestamp, log.map.correlationId as CorrelationId, level as Level, message as Message
| eval JobType=case(like('Message', "%Ondemand Started%"), "OnDemand", like('Message', "%Process started%"), "Scheduled", true(), "Unknown")
| eval Message=trim(Message, "\"")
| table Timestamp CorrelationId Level JobType "FileName/JobName" Message
| join CorrelationId type=left
    [ search index="Test" applicationName="sapi" level=ERROR
    | rename log.map.correlationId as CorrelationId, level as Level, message as Message1
    | dedup CorrelationId
    | table CorrelationId Level Message1 ]
| table Timestamp CorrelationId Level JobType "FileName/JobName" Message1
| join CorrelationId type=left
    [ search index="Test" applicationName="sapi" message="*file archived successfully *"
    | rex field=message max_match=0 "\"Concur file archived successfully for file name: (?<ArchivedFileName>[^\n]\w+\S+)"
    | eval ArchivedFileName=rtrim(ArchivedFileName, "\"")
    | rename log.map.correlationId as CorrelationId
    | table CorrelationId ArchivedFileName ]
| table Timestamp CorrelationId Level JobType "FileName/JobName" ArchivedFileName Message1
| join CorrelationId type=left
    [ search index="Test" applicationName="sapi" (log.map.processorPath=ExpenseExtractProcessingtoOracle* AND ("*GL Import*" OR "*APL Import*"))
    | rename timestamp as Timestamp1, log.map.correlationId as CorrelationId, level as Level, message as Message
    | eval Status=case(like('Message', "%GL Import flow%"), "SUCCESS", like('Message', "%APL Import flow%"), "SUCCESS", like('Level', "%Exception%"), "ERROR")
    | rename Message as Response
    | table Timestamp1 CorrelationId Status Response ]
| eval Status=if(Level="ERROR", "ERROR", Status)
| eval StartTime=round(strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Timestamp1, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs, "%H:%M:%S")
| eval Response=coalesce(Response, Message1)
| table Timestamp CorrelationId Level JobType "FileName/JobName" ArchivedFileName Status Response
| search Status=*
| stats count by JobType
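Each left join re-runs a subsearch over the same index and is subject to subsearch result and runtime limits, which is usually what makes dashboards like this crawl. Since every branch filters index="Test" applicationName="sapi" and keys on the correlation ID, the standard rewrite is a single pass plus stats by that ID. A minimal sketch of the pattern only -- the classification evals from the query above would still need to be folded in:
index="Test" applicationName="sapi"
| rename log.map.correlationId as CorrelationId
| eval isError=if(level="ERROR", 1, 0)
| stats earliest(_time) as StartTime latest(_time) as EndTime max(isError) as hasError values(message) as Messages by CorrelationId
| eval ElapsedTimeInSecs=EndTime-StartTime
Running the shared portion once as a dashboard base search, with panels post-processing it, also avoids re-running the search per panel.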
Hey there! I've set up Splunk Enterprise using the AWS AMI. Now I'm attempting to install the Splunk Essentials app, but I'm running into some issues. First, when I tried to upload the .tgz file, it got blocked. Then I attempted to install it through the in-product marketplace, but my correct username and password from splunk.com aren't working. I'm not sure how to fix this. Any help would be appreciated. Thanks!
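If the web upload and splunk.com login keep failing, installing from the command line on the instance sidesteps both. A sketch, assuming you can copy the .tgz onto the box (paths are placeholders):
# after copying the package to the instance, e.g. via scp:
/opt/splunk/bin/splunk install app /tmp/splunk-essentials.tgz
/opt/splunk/bin/splunk restart
Equivalently, extracting the .tgz into $SPLUNK_HOME/etc/apps/ and restarting achieves the same thing.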
Hi All, we are trying to install Splunk through a Chef script, but the installation gets stuck and times out after 20 minutes. The command we ran is given below:
/opt/splunkforwarder/bin/splunk enable boot-start --accept-license --no-prompt --answer-yes
When the Splunk installation script runs on the instance, it always hangs the first time (first screenshot). It then works if the command is run again (second screenshot). Note: after the first run, the CPU went to 100% and the Splunk process then exited.
First run: (screenshot)
Second run: (screenshot)
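A common cause is that enable boot-start on a fresh install triggers first-time-run work that can block under a config-management run. One pattern worth trying -- do the first start explicitly before registering boot-start; the flags are standard splunk CLI options and the service account name is a placeholder:
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk enable boot-start -user splunkfwd --accept-license --no-prompt --answer-yes
This would also match the behavior you see, where the second run succeeds because first-time initialization already completed.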
In the current project, we send application logs to Splunk, while the splunk-otel-collector sends instrumentation data to SignalFx. The issue is that we use the cloudFrontID as a correlation ID to filter logs in Splunk, whereas SignalFx uses the traceId for log tracing. I am currently unable to correlate the application logs' correlation ID with SignalFx's traceId. I attempted to address this by using the "Serilog.Enrichers.Span" NuGet package to log the TraceId and SpanId, but no values were logged in Splunk. How can I access the TraceId generated by the OpenTelemetry Collector within the ASP.NET web application (Framework version: 4.7.2)? Let me know if further details are required from my end.
Hello All, logs are not being indexed into Splunk. My configurations are below.
inputs.conf:
[monitor:///usr/logs/Client*.log*]
index = admin
crcSalt = <SOURCE>
disabled = false
recursive = false
props.conf:
[source::(...(usr/logs/Client*.log*))]
sourcetype = auth_log
My log file name patterns:
Client_11.186.145.54:1_q1234567.log
Client_11.186.145.54:1_q1234567.log.~~
Client_12.187.146.53:2_s1234567.log
Client_12.187.146.53:2_s1234567.log.~~
Client_1.1.1.1:2_p1244567.log
Client_1.1.1.1:2_p1244567.log.~~
Some of the log files start with the line below, followed by the log events:
===== JLSLog: Maximum log file size is 5000000
For that one, I tried the following configs one by one, but nothing worked: adding crcSalt = <SOURCE> in the monitor stanza; adding SEDCMD in props.conf:
SEDCMD-removeheadersfooters = s/\=\=\=\=\=\sJLSLog:\s((Maximum\slog\sfile\ssize\sis\s\d+)|Initial\slog\slevel\sis\sLow)//g
and a regex in transforms.conf:
[ignore_lines_starting_with_equals]
REGEX = ^===(.*)
DEST_KEY = queue
FORMAT = nullQueue
props.conf:
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)===
TRANSFORMS-null = ignore_lines_starting_with_equals
When I checked the splunkd logs, no errors were captured, and list inputstatus shows:
percent = 100.00
type = finished reading / open file
Please help me out with this issue if anyone has faced and fixed it before. The weird scenario is that sometimes only the first line of the log file is indexed:
===== JLSLog: Maximum log file size is 5000000
Host/server details: OS: Solaris 10; Splunk universal forwarder version 7.3.9; Splunk Enterprise version 9.1.1. The restriction is that the host OS can't be upgraded right now, so I am stuck on forwarder version 7.3.9.
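Two things stand out. First, a universal forwarder does not parse unstructured data, so SEDCMD, TRANSFORMS, and LINE_BREAKER only take effect in props/transforms on the indexer (your 9.1.1 instance), not on the 7.3.9 UF. Second, LINE_BREAKER = ([\r\n]+)=== makes "===" part of the event breaker, which can swallow data. A sketch of indexer-side settings that keep normal line breaking and just drop the header line (stanza names mirror yours; test on a sample file first):
# props.conf -- on the indexer, not the universal forwarder
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-null = ignore_lines_starting_with_equals

# transforms.conf -- on the indexer
[ignore_lines_starting_with_equals]
REGEX = ^=====\sJLSLog:
DEST_KEY = queue
FORMAT = nullQueue
If sourcetype assignment is the only goal of the source stanza on the forwarder, a plain [source::/usr/logs/Client*.log*] is a simpler form than the nested parentheses.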