When searching metrics.log for the indexers in Splunk Cloud I'm seeing the following:

group=pipeline, name=typing, processor=regexreplacement, cpu_seconds=0.002, executes=838, cumulative_hits=10378371, in=113716, out.splunk=111870, out.drop=1846

What is out.drop telling me here? Am I losing data, and what do I need to configure so that I don't lose data?
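For anyone chasing the same counters, a small sketch that trends the typing pipeline's drop and pass counts from metrics.log, assuming the standard _internal index is searchable from your role:

index=_internal source=*metrics.log* group=pipeline name=typing processor=regexreplacement
| timechart span=10m sum(out.drop) as dropped, sum(out.splunk) as passed

If dropped stays non-zero, splitting by host instead can show which indexer is discarding events.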
Hello everyone, I am looking for an SPL solution to determine the length of the longest common substring of two strings. Is there any built-in way to do that? Or is there an app that provides a command for that? Thanks in advance!
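I don't believe there is a built-in longest-common-substring function. As a brute-force sketch, you can enumerate every substring of one string with mvrange and mvexpand and test it against the other; a and b are hypothetical fields here, and note that like() treats % and _ as wildcards, so this needs escaping if those characters can occur in your data:

| makeresults
| eval a="interstellar", b="stellarwind"
| eval start=mvrange(0, len(a))
| mvexpand start
| eval length=mvrange(1, len(a) - start + 1)
| mvexpand length
| eval sub=substr(a, start + 1, length)
| where like(b, "%" . sub . "%")
| eval sublen=len(sub)
| sort - sublen
| head 1
| table a b sub sublen

This generates O(n^2) candidate rows per comparison, so it is only practical for short strings or small result sets.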
I have Log Analytics deployed through the agent machine using jobs, and I parse the logs with a grok expression. However, I noticed that I also receive data in the database that clearly does not match, meaning events that do not have an ERROR logLevel. I don't want to parse those into columns, and because of capacity I don't even want them in the database.

grok patterns:
- "%{TIMESTAMP_ISO8601:logEventTimestamp}%{SPACE}\\[%{NUMBER:logLevelId}\\]%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}-%{SPACE}%{GREEDYDATA:msg}"

pattern.grok:
LOGLEVEL ([Ee]rr?(?:or)?|ERR?(?:OR)?)

Required data: (screenshot not shown)
Unnecessary data: (screenshot not shown)

How can I get rid of them? Can a where clause or a filter be used somewhere?
Do any versions of Splunk and Splunk products utilize python-werkzeug?
Hi, I have two fields whose time zones seem to be different. Could you help me compute the difference?

itime = 2024-02-22 20:56:02,185
stime = 2024-02-23T01:56:02Z

I tried the below, but it always shows a delay of around 5 hours.

SPL:
| eval itime=strptime(Initial_Time,"%Y-%m-%d %H:%M:%S,%3N")
| eval stime=strptime(s_time,"%Y-%m-%dT%H:%M:%S")
| eval difference = abs(itime - stime)
| eval time_difference=tostring(difference, "duration")
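The 5-hour gap is consistent with the two timestamps being parsed in different time zones: the trailing "Z" marks stime as UTC, but strptime without a zone specifier interprets it in the search head's local time. A sketch of one way to handle it, assuming itime really is local time and stime is UTC:

| eval itime=strptime(Initial_Time, "%Y-%m-%d %H:%M:%S,%3N")
| eval stime=strptime(replace(s_time, "Z$", "+0000"), "%Y-%m-%dT%H:%M:%S%z")
| eval difference=abs(itime - stime)
| eval time_difference=tostring(difference, "duration")

The replace() turns the "Z" suffix into an explicit +0000 offset so that %z can anchor the parse to UTC.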
Hello, I have a requirement to get logs into Splunk from Snowflake, and I have no idea where to start. I came across the Splunk docs for Splunk DB Connect: https://docs.splunk.com/Documentation/DBX/3.15.0/DeployDBX/Installdatabasedrivers  Can you guide me on how to get started? How do I get logs from Snowflake into Splunk? Can I use a HEC token to get the logs?
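Both routes are viable: Splunk DB Connect pulls from Snowflake over JDBC on a schedule (the doc you linked covers installing the driver), while HEC lets an external process, such as an ETL job, push events in. A sketch of the HEC side with a hypothetical host, token, and sourcetype:

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"sourcetype": "snowflake:query_history", "event": {"query_id": "01ab-example", "user": "ETL_SVC"}}'

For DB Connect, after installing the Snowflake JDBC driver you would define a connection and a scheduled input running a query such as SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY with a rising column.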
Hello,

What's the official limit on query results in Splunk? Is this also documented somewhere on the Splunk website?

Kind regards, Christian
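If it helps while you look for the docs page: the number of result rows a search can return is governed by maxresultrows under the [searchresults] stanza of limits.conf (50,000 by default). A sketch for checking the effective value on your own instance, assuming your role is allowed to run the rest command:

| rest splunk_server=local /services/configs/conf-limits/searchresults
| fields maxresultrows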
Hi Team, we need to monitor memory by process for each Windows host. When we checked, we couldn't find any processes for memory and the instance value only shows "0". We do monitor CPU by process for these hosts; that data contains the instance field and we can see the process names for CPU. We verified the inputs.conf stanzas and everything looks good. We need your help configuring memory by process so the instance field is populated.

Below is the query used to populate CPU utilization by process:

index=perfmon source="Perfmon:Process" counter="% Processor Time" instance!=_total instance!=idle
| timechart limit=20 useother=f span=10m max(Value) as cputime by instance

We need the same for memory utilization by process, with counter="% Committed Bytes In Use". Thanks
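One thing worth noting: "% Committed Bytes In Use" is a counter on the system-wide Memory object, which has no per-process instances, which is why instance comes back as 0. Per-process memory lives on the Process object, under counters such as "Working Set" and "Private Bytes". A sketch of an inputs.conf stanza plus a matching search, with an assumed stanza name, interval, and index:

[perfmon://Process-Memory]
object = Process
counters = Working Set; Private Bytes
instances = *
interval = 60
index = perfmon

index=perfmon source="Perfmon:Process-Memory" counter="Working Set" instance!=_total instance!=idle
| timechart limit=20 useother=f span=10m max(Value) as membytes by instance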
Hello Splunkers!! I want to configure Active Directory authentication in Splunk with LDAP. My Splunk server and domain controller are two different servers on the same network. Please guide me on the steps I need to follow.

1. Shall I open inbound or outbound port 389 on both servers?
2. How do I create a user and user group in Active Directory?
3. After the LDAP mapping, does it impact the existing Splunk users?
4. Please share a document if anybody has already performed a POC on this.
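For what it's worth, only the Splunk server needs to reach the domain controller on port 389 (or 636 for LDAPS), so the rule is outbound from Splunk and inbound on the DC. The LDAP strategy lives in authentication.conf (Settings > Authentication methods in the UI); a minimal sketch with hypothetical DNs and group names, adjust everything to your domain:

[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = dc01.corp.local
port = 389
bindDN = CN=svc-splunk,OU=Service Accounts,DC=corp,DC=local
bindDNpassword = <password>
userBaseDN = OU=Users,DC=corp,DC=local
userNameAttribute = sAMAccountName
realNameAttribute = displayName
groupBaseDN = OU=Groups,DC=corp,DC=local
groupNameAttribute = cn
groupMemberAttribute = member

[roleMap_corp_ldap]
admin = Splunk-Admins
user = Splunk-Users

Existing native Splunk accounts keep working alongside LDAP; the role mapping only applies to LDAP users.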
Hi, does Splunk have a document template for capturing dashboard requirements from users? The points below are what we ask a user about what they are looking for in a dashboard. Are there any other points to add? I need a sample template.

1. Identify stakeholders' goals to suggest dashboard metrics ...
2. Take a questions-first approach ...
3. Understand how end users interact with the dashboard ...
4. Identify which KPIs are the most important ...
5. Create a step-by-step workflow ...
6. Collaborate on the dashboard layout in advance ...
7. Choose the dashboard type before creating it ...
8. Repurpose analyzed data to create useful dashboards ...

Thanks, Karthi
Hi all, I have a lookup which had around 1000 entries. Recently someone updated the lookup and all entries got deleted. How can I find out who updated the lookup?
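A sketch of one place to look, assuming the change was made through Splunk Web (exact field names can vary a bit by version):

index=_internal sourcetype=splunkd_ui_access method=POST "lookup-table-files"
| table _time user status uri

The _audit index can also help if the lookup was rewritten by a search, for example someone running outputlookup.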
Hello experts... I need help... I want to fetch disabled AD user accounts. Can someone share a Splunk query for the same?
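Two sketches, depending on what you have ingested. If you collect Windows Security logs, event 4725 ("A user account was disabled") captures disable actions (index name assumed; field names depend on your add-on):

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4725
| table _time host Account_Name

If you have the SA-ldapsearch app connected to your domain, you can query the current state instead; the LDAP matching rule below selects accounts with the disabled bit set:

| ldapsearch search="(&(objectClass=user)(userAccountControl:1.2.840.113556.1.4.803:=2))" attrs="sAMAccountName,displayName"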
Hello experts... I need help... I want to fetch details of orphaned Azure disks. Can someone share a Splunk query for the same?
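This depends on which add-on feeds your Azure resource inventory into Splunk, so treat this as a rough sketch only. In Azure itself, an orphaned managed disk reports diskState = "Unattached"; assuming your resource events carry that property (index, sourcetype, and field names are all assumptions):

index=azure sourcetype="azure:compute:disk" properties.diskState="Unattached"
| table name resourceGroup location properties.diskSizeGB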
I downloaded and installed these apps from Splunkbase.

https://splunkbase.splunk.com/app/4232
https://splunkbase.splunk.com/app/2642

As per the instructions, I added sourcetype=linux_audit to the local "auditd_events" eventtype in the TA and linux_audit to the list of sourcetypes in TA-linux_auditd/lookups/auditd_sourcetypes.csv, but the dashboard data is not showing up. My existing auditd events use different sourcetype and eventtype names. For example, I get the auditd events with:

index="linux_fw" sourcetype="syslog" eventtype="mycustom_audit_events"

So do I need to add sourcetype="syslog" to the local "auditd_events" eventtype in the TA and add syslog to the list of sourcetypes in TA-linux_auditd/lookups/auditd_sourcetypes.csv?
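That is the usual fix, since those dashboards scope everything through the eventtype and the lookup. A sketch of the local eventtype override, assuming the eventtype just needs to match your events (check the exact base search your TA version ships with):

# TA-linux_auditd/local/eventtypes.conf
[auditd_events]
search = index="linux_fw" sourcetype="syslog"

Then add syslog as a row in TA-linux_auditd/lookups/auditd_sourcetypes.csv as you described.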
Hello experts... I need help... I want to fetch Azure snapshot details, active snapshots only; I don't need details of snapshots that were already deleted. Can someone share a Splunk query for the same?
Advanced Bot Detected on Imperva WAF
Backdoor Detected on Imperva WAF
Bot Access Control Detected on Imperva WAF

Can anyone help me find custom search queries for the above use cases?
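Hard to be precise without knowing which Imperva add-on and sourcetype you ingest, but as a rough sketch, assuming the alert name or attack type appears in the events (index, sourcetype, and field names are assumptions):

index=imperva sourcetype="imperva:waf" ("Advanced Bot" OR "Backdoor" OR "Bot Access Control")
| stats count by attack_type src dest

Each use case can then become its own saved search with the matching attack_type filter.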
Hi, I have a query that returns counts of different response codes per server for 2 days; now I need to find the difference between these two days.

Current output:

Respcodes  Srv1  Srv2  Srv3  Srv4  ...
200        80    10    100   42
400        12    55    11    0
500        11    34    2     8
...

Expected output:

Date        Respcodes  Srv1  Srv2  Srv3  Srv4  ...
2024/02/23  200        80    10    100   42
2024/02/24  200        70    19    11    11
2024/02/23  400        12    55    11    0
2024/02/24  400        44    14    46    89
2024/02/23  500        11    34    2     8
2024/02/24  500        11    34    2     9
...

If there is a way to calculate the delta per server between the two dates, that would be great! Any ideas? Thanks
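A sketch of one way to get there, with hypothetical field names: status for the response code and host for the server. The trick is to fold the date into the row key before pivoting:

index=web earliest=-2d@d latest=@d
| eval Date=strftime(_time, "%Y/%m/%d")
| eval key=Date . "|" . status
| chart count over key by host
| eval Date=mvindex(split(key, "|"), 0), Respcodes=mvindex(split(key, "|"), 1)
| fields - key
| table Date Respcodes *

And for the per-server delta between the two days, sign the later day +1 and the earlier day -1, then sum:

index=web earliest=-2d@d latest=@d
| eval Date=strftime(_time, "%Y/%m/%d")
| stats count by Date status host
| eval signed=count * if(Date=strftime(relative_time(now(), "-1d@d"), "%Y/%m/%d"), 1, -1)
| stats sum(signed) as delta by status host
| xyseries status host delta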
I updated etc/apps/custom_app/local/data/ui/nav/default.xml with a new sub-menu choice that should display saved searches based on a name pattern:

<collection label="Menu Choice">
    <saved source="unclassified" match="Pattern" />
</collection>

where "Pattern" is some specific pattern which is in the name of 17 saved searches (reports and alerts). The alerts are scheduled and the reports are not. All the alerts and reports are shared in App, not Private. This newly added menu choice doesn't appear in the menu. The menu has many other working menu choices displaying dashboards or other saved searches. What could be the reason? I tried reloading the app and changing the ownership of one of the alerts and reports, but it didn't help. I am logged in as admin. Thank you.
Hi, I am writing a query for a Splunk dashboard. The base search has multiple joined subsearches, and the page loads very slowly; I need to improve the performance of the dashboard query. This is my query (cleaned up from a messy paste, so a detail or two may differ from the original):

index="Test" applicationName="sapi" timestamp log.map.correlationId level message ("Ondemand Started*" OR "Process started*")
| rex field=message max_match=0 "\"Ondemand Started for. filename: (?<OnDemandFileName>[^\n]\w+\S+)"
| rex field=message max_match=0 "Process started for (?<FileName>[^\n]+)"
| eval OnDemandFileName=rtrim(OnDemandFileName, "Job")
| eval "FileName/JobName"=coalesce(OnDemandFileName, FileName)
| rename timestamp as Timestamp, log.map.correlationId as CorrelationId, level as Level, message as Message
| eval JobType=case(like('Message', "%Ondemand Started%"), "OnDemand", like('Message', "%Process started%"), "Scheduled", true(), "Unknown")
| eval Message=trim(Message, "\"")
| table Timestamp CorrelationId Level JobType "FileName/JobName" Message
| join CorrelationId type=left
    [ search index="Test" applicationName="sapi" level=ERROR
    | rename log.map.correlationId as CorrelationId, level as Level, message as Message1
    | dedup CorrelationId
    | table CorrelationId Level Message1 ]
| table Timestamp CorrelationId Level JobType "FileName/JobName" Message1
| join CorrelationId type=left
    [ search index="Test" applicationName="sapi" message="*file archived successfully*"
    | rex field=message max_match=0 "\"Concur file archived successfully for file name: (?<ArchivedFileName>[^\n]\w+\S+)"
    | eval ArchivedFileName=rtrim(ArchivedFileName, "\"")
    | rename log.map.correlationId as CorrelationId
    | table CorrelationId ArchivedFileName ]
| table Timestamp CorrelationId Level JobType "FileName/JobName" ArchivedFileName Message1
| join CorrelationId type=left
    [ search index="Test" applicationName="sapi" (log.map.processorPath=ExpenseExtractProcessingtoOracle* AND ("*GL Import*" OR "*APL Import*"))
    | rename timestamp as Timestamp1, log.map.correlationId as CorrelationId, level as Level, message as Message
    | eval Status=case(like('Message', "%GL Import flow%"), "SUCCESS", like('Message', "%APL Import flow%"), "SUCCESS", like('Level', "%Exception%"), "ERROR")
    | rename Message as Response
    | table Timestamp1 CorrelationId Status Response ]
| eval Status=if(Level="ERROR", "ERROR", Status)
| eval StartTime=round(strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Timestamp1, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs, "%H:%M:%S")
| eval Response=coalesce(Response, Message1)
| table Timestamp CorrelationId Level JobType "FileName/JobName" ArchivedFileName Status Response "Total Elapsed Time"
| search Status=*
| stats count by JobType
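Since all four legs hit the same index="Test" applicationName="sapi" data and correlate on the same key, the biggest win is usually to drop the joins and do a single pass with stats by the correlation id. A rough sketch of the shape, not a drop-in replacement:

index="Test" applicationName="sapi"
| rename log.map.correlationId as CorrelationId
| eval kind=case(like(message, "%Ondemand Started%") OR like(message, "%Process started%"), "start", like(message, "%file archived successfully%"), "archived", level="ERROR", "error", true(), null())
| stats earliest(timestamp) as Timestamp latest(timestamp) as Timestamp1 values(eval(if(kind="error", message, null()))) as ErrorMessages values(eval(if(kind="archived", message, null()))) as ArchivedMessages by CorrelationId

The filename extractions and the Status/elapsed-time evals can then run once over the aggregated rows instead of inside four subsearches, each of which re-scans the index and is subject to subsearch limits.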
Hey there! I've set up Splunk Enterprise using an AWS AMI. Now I'm attempting to install the Splunk Essentials app, but I'm running into some issues. First, when I tried to upload the .tgz package, it got blocked. Then I attempted to install it through the marketplace, but my correct username and password from splunk.com aren't working. I'm not sure how to fix this. Any help would be appreciated. Thanks!
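If the web upload keeps getting blocked, a sketch of the CLI fallback, assuming shell access to the instance and a hypothetical download path (the -auth user is your local Splunk admin account, not your splunk.com login):

$SPLUNK_HOME/bin/splunk install app /tmp/splunk-essentials.tgz -auth admin:<password>
$SPLUNK_HOME/bin/splunk restart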