All Topics

With the AWS Add-on for Splunk (version 5.0.3) we can pull logs from a CloudFront S3 bucket via the "Generic S3" input type, or from an Application Load Balancer with the "ELB, Generic S3" input type. The problem is that the timestamps in the CloudFront log objects in S3 carry no timezone, and our Splunk instance is incorrectly defaulting to local time. The ELB logs are correctly converted from UTC to local time in searches. How might we force the timezone to UTC for these events at ingest? I tried creating a /opt/splunk/etc/apps/Splunk_TA_aws/local/props.conf file (yes, $SPLUNK_HOME is /opt/splunk) on our heavy forwarder and restarted with this content:

[aws:cloudfront:accesslogs]
TZ = UTC

Alas, no dice yet. Suggestions?
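One thing worth checking: TZ only takes effect on the instance that actually parses the events (here, the heavy forwarder running the add-on input), and the stanza must win over the add-on's default props. A sketch of a fuller stanza — the TIME_* lines are an assumption based on the CloudFront standard log layout, where each line starts with tab-separated date and time fields:

```
# /opt/splunk/etc/apps/Splunk_TA_aws/local/props.conf on the parsing instance
[aws:cloudfront:accesslogs]
TZ = UTC
# Optional: pin the timestamp explicitly so Splunk does not guess.
# strptime treats whitespace in the format as matching the tab separator.
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

You can confirm which stanza wins with `splunk btool props list aws:cloudfront:accesslogs --debug` on the heavy forwarder; if the events arrive with a different sourcetype than aws:cloudfront:accesslogs, the stanza will never match.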
I would like to create a pie chart showing how many calls took less than 100ms, 200ms, and 300ms.

index=star env=prod | search time > 100 | stats count by time

How can I add the > 200 and > 300 buckets in the same query?
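One way to get all the buckets in a single query is to label each event with eval/case and count by the label — assuming here that the field time holds the call duration in milliseconds:

```
index=star env=prod
| eval bucket=case(time<=100, "under 100ms",
                   time<=200, "100-200ms",
                   time<=300, "200-300ms",
                   true(), "over 300ms")
| stats count by bucket
```

Rendered as a pie chart, each bucket becomes one slice; adjust the boundaries or comparison direction to match whether you want "less than" or "greater than" ranges.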
I have the Mitre App for Splunk installed in our Enterprise Security deployment, and the Mitre Dashboard is up. I need help creating a repeatable process for adding custom content. I want to bring a "Detection Catalog" created by one of our engineers into the Mitre Dashboard. How can I automate, or at least make repeatable, the process of loading this custom content into the Mitre Dashboard?
We see some hosts not currently reporting into Splunk in Oct 2021. When we analyzed the previous month (Sept 2021), those hosts were reporting to Splunk. Is there any query/method to find the exact time these hosts last reported? Thanks
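One way to find the last time each host reported is the metadata command, run over a time range that covers both months — a sketch, with index=* as a placeholder for your indexes:

```
| metadata type=hosts index=*
| eval lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| sort lastTime
| table host lastSeen
```

lastTime is the latest event timestamp seen for each host (recentTime is the latest index time, which can differ if data arrives late). For large environments, `| tstats latest(_time) as lastSeen where index=* by host` gives a similar answer.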
I am trying to determine the length of a spike to see if it goes beyond our requirements. Here is a test of my search:

index="database" source = IIQDB:*
| fields _time, FileGrpName, source, sourcetype, database, Spaced_Used_Per, AvailSpaceMB, Value, SQL_Server_Process_CPU_Utilization, System_Idle_Process, Other_Process_CPU_Utilization, free_log_space_Perc, lag_seconds, Requests, host, server, Task_Name, job, recent_failures, last_run, Target
| rex field=host "^(?P<hostname>[^\.]+)"
| rex field=Value "(?P<pctValue>.*)\%"
| eval TasksPaused = if(sourcetype="mssql:AGS:TaskSchP", Task_Name, null())
| search TasksPaused="*" TasksPaused="Intel-TaskSchedule-FullTextIndexRefresh" host="agsprdb1.ed.cps.intel.com"
| eval ptime=strptime(last_run,"%Y-%m-%d %H:%M:%S")
| eval TimeDiff=(now()-ptime)/60
| sort _time
| streamstats reset_on_change=true earliest(_time) as earlyTime latest(_time) as lastTime by TasksPaused
| eval duration = (lastTime - earlyTime)/60

Some of it is extra from the whole search; I am trying to narrow down the problem with this section. Wish we could post a picture of our timeline, but I will simulate it here:                  /\                                     ---/\                                                       /--------\ --------/       \--------------------/           \-------------------------------/                   \-------------------------------------------
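For measuring how long a value stays above a threshold, one pattern is to flag each event as in/out of the spike and let streamstats reset on that flag — a sketch, where the 90% threshold is a placeholder for your actual requirement:

```
index="database" source=IIQDB:*
| rex field=Value "(?<pctValue>.*)\%"
| sort 0 _time
| eval inSpike=if(pctValue > 90, 1, 0)
| streamstats reset_on_change=true earliest(_time) as spikeStart latest(_time) as spikeEnd by inSpike
| where inSpike=1
| eval spikeMinutes=(spikeEnd - spikeStart)/60
```

With reset_on_change=true, each contiguous run of inSpike=1 gets its own running earliest/latest, so the last event of each run carries the full spike duration; a final `| stats max(spikeMinutes) by spikeStart` collapses each run to one row.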
Hi, I'm trying to use a lookup file inside an if statement, and it doesn't return any data. I would appreciate any help. Thanks! The lookup file has 4 columns (TenantName, tenantId, Region, DB) and my base search returns 5 columns (_time, TenantName, tenantId, Region, Status). I need to find the database name (DB) for each record, using tenantId from the base search wherever tenantId is not "Unknown".

<base search>
| table _time TenantName tenantId Region Status
| eval Database=if(tenantId!="Unknown", [| inputlookup myLookup | where tenantId=tenantId | return $DB], [| inputlookup myLookup | where TenantName=TenantName | return $DB])
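Subsearches inside eval won't work the way the search above intends: a subsearch runs once, before the outer search, so it can't see each row's tenantId (and `where tenantId=tenantId` compares the lookup's own field to itself). One approach is to do both lookups per-row with the lookup command and pick the result with if:

```
<base search>
| lookup myLookup tenantId OUTPUT DB as DB_byId
| lookup myLookup TenantName OUTPUT DB as DB_byName
| eval Database=if(tenantId!="Unknown", DB_byId, DB_byName)
| table _time TenantName tenantId Region Status Database
```

The `as DB_byId` / `as DB_byName` renames are just to keep the two lookup results separate before the if chooses between them.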
Hi everyone. I was watching some events in the internal logs and saw many events like "ERROR AdminManagerDispatch - Admin handler 'alert_manager' not found.". I recently upgraded the Alert Manager app from v2 to v3, but I do not know whether that upgrade is related. Does anyone know what could be happening? Thank you so much
In my organization we are planning to install heavy forwarders for some domains. What are the hardware requirements for heavy forwarders? What is the ratio of servers to heavy forwarders?
Hello all! I hope someone can help. After we upgraded our Splunk Enterprise to 8.2, we are getting these error messages on our search head clusters regarding our indexers:

Auto Load Balanced TCP Output Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.
Hello, is it possible to run the search of a dashboard by using its ID? Also, can I add fields to that search? I.e. if a dashboard runs this search:

(index="mysource" earliest=-264h latest=now()) | eval metric=case(index="mysource", '_time') ...

can I do something like:

search $dashboard_id$ = 'my_dashboard' | eval Timestamp=strftime(now(),"%d/%m/%Y %H:%M:00") | table A1 A2 Timestamp

i.e. append additional code? Thanks!
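As far as I know there is no SPL command that executes a dashboard by its ID (`| savedsearch <name>` only works for saved reports). What you can do is fetch the dashboard's XML — which contains its query — over REST and reuse it; a sketch, using the "my_dashboard" ID from the example above:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search title="my_dashboard"
| table title eai:data
```

If the goal is to append pipeline stages to the dashboard's search, saving that search as a report and running `| savedsearch my_report | eval Timestamp=... | table A1 A2 Timestamp` is the more direct route.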
Hello, I recently upgraded to 3.7.0 and have encountered a few issues:
1. The bulk edit features don't show up in the UI, and there aren't any checkboxes as shown in the docs.
2. The "Edit Incident" button doesn't do anything.
3. The doexternalworkflowaction field shows when it usually does not, and the other field names aren't the clean field names.
Any ideas on how I can resolve these issues? Thanks for the help.
I am trying to search for a number of events over a select period of time (4 hours) and then expand that to see how much of this traffic occurs in a 30-day period. I can use the time range picker for the initial 4 hours, but when I expand it, I get too much data. The search I am using:

index="Firewalls" action=blocked | stats count by client_ip | search count > 3500 | sort -count

Is there a way to limit the results to something like "count > 3500 over 4 hours" while the time range is 30 days?
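One way to keep the 4-hour threshold while searching 30 days is to bucket events into 4-hour windows with bin before counting, so the count is per client per window rather than over the whole range:

```
index="Firewalls" action=blocked earliest=-30d
| bin _time span=4h
| stats count by client_ip, _time
| where count > 3500
| sort -count
```

Each remaining row is a client_ip and the start of a 4-hour window in which it exceeded 3500 blocked events; a trailing `| stats sum(count) by client_ip` would roll that back up per client if needed.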
Has anyone ever installed the Netwrix addon in Splunk? Having a bit of trouble with how to do so. 
The "File a Bug" link under the Help menu goes here: http://www.splunk.com/r/bugs
If you go there, it asks you to log in and then dumps you to the homepage. If you click it again, it takes you here: https://splunkcommunities.force.com/customers/apex/CP_CaseSubmissionPage?caseID=NewCase
I am trying to set up a regex that works on, say, regexr.com but doesn't apply in my transforms/props files. I want to drop any Apache log events that contain: assets/js, assets/css, assets/img. I can set one up singular and it works fine, but the two commented-out lines, even though they work in a regex tester, don't seem to apply in my transforms file. Any insight into what I may be doing wrong? Thank you for any assistance.

[drop_assets]
REGEX = .*assets\/js.*
#REGEX = .*(assets\/js|assets\/css|assets\/img).*
#REGEX = .*assets/js.*|.*assets/css.*|.*assets/img.*
DEST_KEY = queue
FORMAT = nullQueue

[apache]
TRANSFORMS-drop = drop_assets
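Alternation is valid in transforms.conf REGEX, and only one REGEX line per stanza takes effect, so the usual culprits are elsewhere: the config must live on the first full Splunk instance that parses the data (indexer or heavy forwarder, never a universal forwarder), the props stanza must exactly match the events' sourcetype, and both instances need a restart. A minimal sketch, assuming the sourcetype really is apache:

```
# transforms.conf
[drop_assets]
REGEX = assets/(js|css|img)
DEST_KEY = queue
FORMAT = nullQueue

# props.conf -- stanza name must match the events' sourcetype exactly
[apache]
TRANSFORMS-drop = drop_assets
```

The leading/trailing .* are unnecessary because REGEX is unanchored; `splunk btool transforms list drop_assets --debug` will show whether the parsing instance actually sees the stanza.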
Hi! I have a dropdown where one of the values is "Unknown". I would like the dropdown to offer each individual value, an "All" option, and also an "All except Unknown" option. Has anyone been able to do this? Thank you very much!
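One trick is to make the token carry a complete search filter rather than a bare value, so static choices can hold arbitrary expressions. A Simple XML sketch — the field name TenantName and index=my_index are placeholders for your own:

```xml
<input type="dropdown" token="tenant_filter">
  <label>Tenant</label>
  <choice value="TenantName=*">All</choice>
  <choice value="TenantName=* TenantName!=&quot;Unknown&quot;">All except Unknown</choice>
  <fieldForLabel>TenantName</fieldForLabel>
  <fieldForValue>search_value</fieldForValue>
  <search>
    <query>index=my_index | stats count by TenantName
| eval search_value="TenantName=\"".TenantName."\""</query>
  </search>
  <default>TenantName=*</default>
</input>
```

The panel search then uses the token raw, e.g. `index=my_index $tenant_filter$ | ...`, since each choice already expands to a full field=value expression.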
Hi team, we want to ingest SMB server audit logs into Splunk from a universal forwarder (via inputs.conf, etc.). Can anyone help with the configuration steps or point to any Splunk docs for reference? Thanks in advance! Thanks, Sharada Pandilla
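Assuming a Windows file server: share-access audit events (e.g. 5140/5145) land in the Security event log once "Audit File Share" / "Audit Detailed File Share" is enabled in Group Policy, and SMB1 access auditing writes to a dedicated operational channel. A sketch of the forwarder side — the index name is a placeholder:

```
# inputs.conf on the universal forwarder
[WinEventLog://Security]
disabled = 0
index = winevents

[WinEventLog://Microsoft-Windows-SMBServer/Audit]
disabled = 0
index = winevents
```

The Windows auditing policy itself has to be enabled on the server first; the forwarder only collects whatever the OS is already writing to those channels.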
Hi, I have many "Caused by:" lines in (single or multiple) events. How can I extract every line that contains "Caused by:", like this one:
Caused by: java.sql.SQLException: ISAM error: duplicate value for a record with unique key.
Any ideas? Thanks
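One way is rex with max_match=0, which captures every match in an event into a multivalue field; the (?m) flag lets ^ match at the start of each line inside a multiline event:

```
<your search>
| rex max_match=0 "(?m)^(?<caused_by>Caused by:[^\r\n]+)"
| table _time caused_by
```

If you want one result row per "Caused by:" line rather than one multivalue field per event, add `| mvexpand caused_by` before the table.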
Hi - I have a command to clean the fishbucket on a forwarder, if I want to re-ingest data for testing etc.:

bin/splunk stop ; cd var/lib/splunk/ ; rm -r fishbucket/ ; cd - ; rm -r var/ ; bin/splunk start

But is there any way to clean the fishbucket for only one sourcetype?
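The fishbucket is keyed by source (the monitored file), not by sourcetype, so there is no direct per-sourcetype reset. What you can do is reset the checkpoint for each individual file that the sourcetype's inputs monitor, using btprobe while Splunk is stopped — the file path below is a placeholder:

```
bin/splunk cmd btprobe -d var/lib/splunk/fishbucket/splunk_private_db \
    --file /var/log/myapp/app.log --reset
```

Repeating that for every file covered by the sourcetype's monitor stanzas re-ingests just that data, without wiping the checkpoints for everything else the forwarder watches.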
We have Splunk Enterprise 8.0 and ES 6.4. What is the proper procedure to upgrade to Splunk Enterprise 8.2.2.1 while retaining the settings and configurations we have made in ES (Enterprise Security)? What about the Security Essentials app we have installed? Any directions are much appreciated. Thanks a million.