Greetings, I'm finally tackling the topic of data models within my organization, and I'm running into situations I need to solve for.

1. Windows authentication data that has null values in the src field, due to the type of authentication taking place. I understand that field aliasing comes into play, and I tried that; however, I tried aliasing a calculated field, which of course does not work. Now I am going back to see whether there is another field I can alias instead.

My ask with this post is to get strategies from other Splunk users who have tackled data cleanup and data models. Are null values acceptable in certain situations, or must every required data model field be populated (action, app, dest, src, user, etc.)? I'd appreciate some feedback on this topic.
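One common pattern for this situation — a sketch, with the sourcetype and fallback field names being assumptions you would adjust to your data — is to populate src with a calculated field rather than an alias, falling back through whatever source fields exist:

```conf
# props.conf on the search head (sketch; [XmlWinEventLog] and the
# fallback fields src_ip / src_nt_host are assumptions)
[XmlWinEventLog]
EVAL-src = coalesce(src, src_ip, src_nt_host, "unknown")
```

coalesce() returns the first non-null argument, so events that already carry src are untouched and the rest get a deliberate placeholder instead of a null, which keeps data model searches like `| tstats ... by Authentication.src` from silently dropping those events.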
Hi, I am trying to monitor many Exchange servers that are not configured the same. I want to give the paths to monitor using an environment variable, such as %ExchangeInstallPath%TransportRoles\Logs\FrontEnd\AgentLog\*, assuming splunkd runs under a user that can read the Windows variable.

Is it possible to monitor like this?

[monitor://%ExchangeInstallPath%TransportRoles\Logs\FrontEnd\AgentLog]

Or:

[monitor://$ExchangeInstallPath\TransportRoles\Logs\FrontEnd\AgentLog]

Being able to do this would avoid creating multiple stanzas with different drives, like:

[monitor://C:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]
[monitor://D:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]
[monitor://E:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]

If there are any other suggestions (other than the obvious, like standardizing installs), please advise. Thank you
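If environment variables turn out not to be expanded in monitor paths (worth verifying against the inputs.conf docs for your version before relying on either answer), one workaround is to enumerate the candidate drives in a single shared app: a monitor stanza whose path does not exist on a given host is simply skipped, so carrying all of them is harmless. A sketch, using `...` as Splunk's recursive directory wildcard and an assumed sourcetype:

```conf
# inputs.conf sketch -- stanzas for drives that don't exist on a
# host are ignored, so one app can cover every layout.
[monitor://C:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]
sourcetype = MSExchange:AgentLog
disabled = 0

[monitor://D:\Program Files\Microsoft\Exchange Server\...\TransportRoles\Logs\FrontEnd\AgentLog\*]
sourcetype = MSExchange:AgentLog
disabled = 0
```

The sourcetype name is an assumption; use whatever your Exchange add-on expects.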
Hello, please help me identify my issue; maybe I'm missing something I don't see. I created a simple PowerShell script to get data from a Certificate Authority server (using the certutil command), then packaged it as a Splunk application. After I deployed the app on the CA server (with Splunk installed) and executed the script manually from PowerShell ISE, I can see output on the console. But during scheduled execution, there's no data in my index. There are no errors in the internal logs, so I can't identify where the issue is. Any feedback will help, thanks. I have also already tried the workarounds from other threads (like using a .path file, a powershell stanza, etc.) and they didn't work.

My .bat file:

@ECHO OFF
Powershell.exe -executionpolicy remotesigned -File "%~dpn0.ps1"

inputs.conf:

[script://.\bin\scripts\get_ca_issued_certs.bat]
disabled = 0
index = cert_authority_idx
sourcetype = ca_issued_certs
interval = 300

Internal logs:

02-22-2023 05:41:24.397 -0800 INFO ExecProcessor [6372 ExecProcessor] - New scheduled exec process: "C:\Program Files\Splunk\etc\apps\cert_authority_win_uf\bin\scripts\get_ca_issued_certs.bat"

Output when manually executed:

Date=2023-02-22_06:02:00_-08:00,object=Cert Authority,counter=Issued Certs Expiry,RequestID=4,RequesterName=NT AUTHORITY\IUSR,SerialNumber=2a0000000455e56fc1482ef85f000000000004,NotAfter=2/21/2024 7:37 AM,Value=364
Date=2023-02-22_06:02:00_-08:00,object=Cert Authority,counter=Issued Certs Expiry,RequestID=5,RequesterName=NT AUTHORITY\IUSR,SerialNumber=2a000000052914506fdbd37f24000000000005,NotAfter=2/21/2024 7:39 AM,Value=364
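One thing worth ruling out — an assumption, since the internal log shows the process launching fine: when splunkd runs the script, the profile, working directory, and PATH differ from an interactive ISE session, so certutil may resolve differently (or not at all) for the service account. A more defensive launcher might look like:

```bat
@ECHO OFF
REM -NoProfile avoids profile scripts the service account doesn't have;
REM %~dp0 expands to the directory this .bat lives in, so the .ps1 is
REM found regardless of the working directory splunkd uses.
Powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%~dp0get_ca_issued_certs.ps1"
```

Also check (another hedged guess) that the script emits its lines with Write-Output rather than Write-Host, since only what reaches the process's stdout is indexed by a scripted input.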
Hi folks, I have a 3-member SHC with Splunk ES. Currently, when ES triggers a notable, the notable fires 3 times, even though throttling is correctly configured. In my opinion the SHC is out of sync. Do you have any suggestions? Regards
Our O365 API keys are expiring, and we are attempting to update them. While doing so, we have a couple of questions. Are there different Splunk instances for the search head, indexer, and data manager? If yes, what are the URLs? We are having difficulty locating a knowledge base article on how to update the API keys. Could you please point us to the relevant documentation? Thanks
I'm trying to create a drilldown for a single value panel. I want my user to be able to click on the value and have it load a new panel with all the details. I have set a token but am not sure where to pass it in the detail panel so that the drilldown works.

Here is my single value panel query:

| eval A = if(DURATION>30, "Long Duration Jobs", "Duration")
| stats count by A
| where A="Long Duration Jobs"

I have another panel which shows the details of these long duration jobs:

| eval Duration = if(DURATION>30, "Long Duration Jobs", "Duration")
| search Duration = "Long Duration Jobs"
| rename EXEC_DATE_TIME as Datetime SERVER_NAME as "System Name" JOB_NAME as "Job Name" STATUS_NAME as "Status" EXEC_DATETIME as "Execution Datetime" DURATION as "Duration(s)" DELAY as "Delay(s)" JOB_COUNT as "Job Count" JBCREATED_BY as "Job Createdby" SDL_DATETIME as "SDL Datetime"
| table Datetime "System Name" "Job Name" "Execution Datetime" "Status" Duration "Duration(s)" "Delay(s)" "Job Count" "Job Createdby" "SDL Datetime"

How do I connect these two panels so that when I click on the single value, the detail panel pops up? Please suggest.
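In Simple XML, one way to wire this up — a sketch, with the token name show_details made up and the queries elided — is to set a token in the single value's drilldown and gate the detail panel on that token with depends:

```xml
<row>
  <panel>
    <single>
      <search>
        <query>... | stats count by A | where A="Long Duration Jobs"</query>
      </search>
      <drilldown>
        <!-- clicking the single value sets the token -->
        <set token="show_details">true</set>
      </drilldown>
    </single>
  </panel>
</row>
<!-- this row is hidden until the token exists -->
<row depends="$show_details$">
  <panel>
    <table>
      <search>
        <query>... your detail search here ...</query>
      </search>
    </table>
  </panel>
</row>
```

An <unset token="show_details"></unset> in another drilldown (or a link) can hide the detail row again; the detail search itself doesn't need the token unless you also want to pass a clicked value into it.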
Hi Team, we are planning to migrate our heavy forwarders to new servers. We have some apps on the heavy forwarders, like DB Connect.

Question: what pre-checks are needed before the migration, and what changes should we ask the app teams to make? We receive many inputs from the app teams via HEC tokens and DB Connect. Can you please assist me with this task? Thanks
Hello Splunk Community, I followed different guides and docs trying to install the Docker universal forwarder, but none of them worked. When I execute the splunk binary, the Splunk instance in the container tries to upgrade itself and gets stuck.

I ran the image with this docker-compose.yml:

version: '3.5'
networks:
  splunk:
    name: splunk-test
services:
  # Splunk Universal Forwarder:
  splunk-forwarder:
    container_name: uf1
    image: splunk/universalforwarder:latest
    restart: always
    ports:
      - "9997:9997"
    volumes:
      - ./splunkforwarder-etc:/opt/splunkforwarder-etc
      - ./SPLUNK_HOME_DIR:/opt/splunkforwarder
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=lwetem21
      - SPLUNK_STANDALONE_URL=https://<MY Splunk Enterprise DNS Name>:8000
    networks:
      - splunk

It stops with this output:

[splunk@8de54aed8c1f splunkforwarder]$ pwd
/opt/splunkforwarder
[splunk@8de54aed8c1f bin]$ ./splunk add forward-server idx1.mycompany.com:9997
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
Error calling execve(): No such file or directory
Error launching command: No such file or directory
execvp: No such file or directory
Do you agree with this license? [y/n]: y
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
-- Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2023-02-22.10-57-49' --
Migrating to:
VERSION=9.0.4
BUILD=de405f4a7979
PRODUCT=splunk
PLATFORM=Linux-x86_64
Error calling execve(): No such file or directory
Error launching command: Invalid argument

The mentioned log, by the way, is an empty file.

I pulled the latest image from:
https://hub.docker.com/r/splunk/universalforwarder
https://kinneygroup.com/blog/splunk-universal-forwarders/

What am I doing wrong, or are there better guides to follow than the links I have already provided?

With kind regards, CJ
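A guess worth testing: bind-mounting ./SPLUNK_HOME_DIR over /opt/splunkforwarder hides the files the image ships in that directory, which would explain both the execve "No such file or directory" errors and Splunk believing it is mid-upgrade (an empty or stale directory looks like an older install). A sketch of the volumes section with only the etc mount the image documents, the rest of the compose file unchanged:

```yaml
    volumes:
      # Persist only the configuration; /opt/splunkforwarder itself
      # stays inside the container so the shipped binaries are intact.
      - ./splunkforwarder-etc:/opt/splunkforwarder-etc
```

If you need the full install persisted, a named Docker volume (which is seeded from the image's contents on first use, unlike a host bind mount) is the usual alternative.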
Configuration is recognized but not applied.

/opt/splunk/etc/apps/jk_cjbeck/local/props.conf:

SEDCMD-StripHeader = s/^[^{]+//
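Two things commonly cause exactly this symptom, sketched below with an assumed stanza name: SEDCMD must sit under a [<spec>] stanza matching the data's sourcetype, source, or host, and because it is an index-time transform it must live on the first full Splunk instance that parses the data (indexer or heavy forwarder, not a universal forwarder) and only affects events indexed after a restart:

```conf
# props.conf -- SEDCMD is index-time: deploy on the parsing tier
# (indexer or heavy forwarder), restart, and note that already
# indexed events are never rewritten.
# The stanza name below is an assumption; match your sourcetype.
[your:sourcetype]
SEDCMD-StripHeader = s/^[^{]+//
```

`splunk btool props list your:sourcetype --debug` will confirm which file the setting is actually being read from.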
Are there any APIs for Splunkbase? I want to get the list of all apps available on Splunkbase with the below information:
1. Splunk app name
2. Splunk folder name
3. app version
4. compatibility (e.g., the app is compatible with Splunk version 7/8/9)
5. CIM compatibility
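Splunkbase does expose a public REST API; the paths below are from memory and should be confirmed against the current Splunkbase developer documentation before building on them. App listings come back as paginated JSON along these lines:

```
https://splunkbase.splunk.com/api/v1/app/?limit=100&offset=0
https://splunkbase.splunk.com/api/v1/app/<app_id>/release/
```

In my recollection, the app objects carry the display name and appid (the folder name), while versions and Splunk-version compatibility appear on the release objects; CIM compatibility may only be available for apps that declare it, so treat that field as best-effort.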
Hi all, I saw that there have been previous posts about this topic, but none resolves my issue. I have created an IAM role with CloudWatchFullAccess and assigned it to the EC2 instance Splunk is running on. The role was auto-discovered, and when setting up an input for CloudWatch I was able to use that role. So far everything is peachy, but I get no metrics in the configured index. Am I missing a policy for the IAM role? Kind regards, Mike
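CloudWatchFullAccess should already include the metric-read calls, so the gap may be elsewhere (region selection, the metric namespace filter on the input, or errors in the add-on's own logs in index=_internal). For comparison, a minimal read-only policy for CloudWatch metric collection typically needs something like the sketch below; treat it as a baseline to diff against rather than a guaranteed fix:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics"
      ],
      "Resource": "*"
    }
  ]
}
```

If the permissions check out, the add-on's internal logs are usually the fastest way to see whether the input is polling at all versus polling and finding no matching metrics.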
Hi Team, I am using Splunk Enterprise with a trial account, which is available for 60 days. How do I upgrade the trial account to something with a minimal data ingestion limit (1 GB/day) that includes the search feature and search API accessibility? Please suggest what upgrade options are available for trial users.
Hi! I want to turn the log below into the table below. What should the SPL be?

Log example:

[2023.01.23] TYPE : UPDATE, USER : master, [ ID : jenny, TYPE- AUTH : AB, O, B, A]

Desired table:

USER    ID      TYPE-AUTH
master  jenny   AB O B A

I tried the SPL below, but the dashboard only shows the first value. Help me please! T.T

| rex field=TYPE-AUTH max_match=0 "(?P<type_auth>\w+)"

Result:

USER    ID      TYPE-AUTH
master  jenny   AB
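A sketch that extracts all three fields from _raw and splits the auth list into a multivalue field — the field names and exact spacing in the regexes are assumptions based on the single sample line:

```spl
| rex field=_raw "USER : (?<USER>\w+)"
| rex field=_raw "ID : (?<ID>\w+)"
| rex field=_raw "TYPE- AUTH :\s*(?<TYPE_AUTH>[^\]]+)"
| eval TYPE_AUTH=split(trim(TYPE_AUTH), ", ")
| table USER ID TYPE_AUTH
```

split() turns the comma-separated string into a multivalue field, so each of AB, O, B, A renders on its own line inside the TYPE_AUTH cell, which matches the desired table.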
Hi, I have to rearrange the columns below into this order: 31-60 Days, 61-90 Days, 91-120 Days, 151-180 Days, Over 180 Days, Total.

Query:

| inputlookup ACRONYM_Updated.csv
| stats count by ACRONYM Aging
| xyseries ACRONYM Aging count
| addtotals
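Since xyseries emits the Aging values as columns in whatever order it encounters them, one fix is an explicit table at the end — a sketch, assuming the column names match the Aging values exactly as listed:

```spl
| inputlookup ACRONYM_Updated.csv
| stats count by ACRONYM Aging
| xyseries ACRONYM Aging count
| addtotals
| table ACRONYM "31-60 Days" "61-90 Days" "91-120 Days" "151-180 Days" "Over 180 Days" Total
```

table both selects and orders the columns (addtotals names its column Total by default); note that any bucket listed in table but absent from the data will simply not appear, and any bucket in the data but missing from the list will be dropped.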
Hi there, I'm trying to get a foreach statement to apply a conditional. Essentially, in the eval statement I tried a variety of if() options, like IN statements (or, less preferably, a long OR to replace the IN), but frankly I'm not having any luck:

foreach Perc_In* [ eval Out_Of_Norm_For<<MATCHSTR>>=if(IN(<<MATCHSTR>>,"_Range_4","_RANGE_4_to_6"),"Consider","Ignore") ]

If <<MATCHSTR>> falls in the set of values "_Range_4" or "_RANGE_4_to_6", then the new field Out_Of_Norm_For<<MATCHSTR>> should take the value Consider; otherwise it takes the value Ignore.
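Two likely issues, offered as guesses from the snippet: eval's membership function is lowercase in(), and <<MATCHSTR>> is substituted as literal text, so to compare it as a string it has to be quoted (unquoted, eval treats the suffix as a field name). A sketch:

```spl
| foreach Perc_In*
    [ eval Out_Of_Norm_For<<MATCHSTR>> = if(in("<<MATCHSTR>>", "_Range_4", "_RANGE_4_to_6"), "Consider", "Ignore") ]
```

The string comparison is case-sensitive, so "_Range_4" and "_RANGE_4_to_6" must match the field-name suffixes exactly; wrapping with lower("<<MATCHSTR>>") and lowercasing the candidates is one way to make it case-insensitive.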
I've a query

index="main" app="student-api" "tags.path"=/enroll "response"=success

which also gives a trace_id, and then I have

index="main" app="student-api"

which gives a student_id. I want to get the latest timestamp of enrollment (by joining the results) for each student_id (stored in a CSV). The output would look like:

student_id | latest timestamp of enrollment

Please suggest the steps to follow. I tried this for the join, but it's not yielding the result:

index="main" app="student-api" tags.student_id
| join type=inner trace_id
    [ search index="main" app="student-api" "tags.path"="/enroll" "response"=success ]

Also, how do I inputlookup the student_id from the CSV? I'd appreciate your help with this. Thanks @ITWhisperer @gcusello
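Since both event types share trace_id, a join-free sketch using stats often works better than join (which is subject to subsearch limits). Field names here are assumptions from the post, and students.csv is a hypothetical lookup file name:

```spl
index="main" app="student-api" (("tags.path"="/enroll" "response"=success) OR "tags.student_id"=*)
| eval enroll_time=if('tags.path'="/enroll", _time, null())
| stats max(enroll_time) as latest_enrollment, latest(tags.student_id) as student_id by trace_id
| search [ | inputlookup students.csv | fields student_id ]
| stats max(latest_enrollment) as latest_enrollment by student_id
| fieldformat latest_enrollment=strftime(latest_enrollment, "%F %T")
```

One search pulls both event types, stats stitches them together per trace_id, and the inputlookup subsearch expands into (student_id="..." OR ...) to keep only the students in the CSV; the final stats gives one row per student with the latest enrollment time.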
I need to create a correlation search that triggers an alert when it finds a match between the IPs from | inputlookup ip_spywarelist.csv and events in an index (i.e., index=FW). Any step-by-step guidance?
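A common pattern is to feed the lookup into the base search as a subsearch — a sketch, where the column name in the CSV (ip) and the firewall field (src_ip) are assumptions to rename to match your data:

```spl
index=FW
    [ | inputlookup ip_spywarelist.csv
      | rename ip as src_ip
      | fields src_ip ]
| stats count earliest(_time) as first_seen latest(_time) as last_seen by src_ip
```

The subsearch expands to (src_ip="1.2.3.4" OR src_ip="5.6.7.8" OR ...), so the outer search returns only events whose src_ip is on the list. Save it as a correlation search (or scheduled alert) over your desired window, triggering when the number of results is greater than 0; to also match destination IPs, repeat the subsearch renamed to dest_ip and OR the two together.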
I have written a Splunk query to extract timeout logs for my functions and am creating a scheduled alert with an email alert action. For the email subject, I want the function name to appear in the subject line. I have tried using $result.fieldname$ and $job.label$ in the subject, but neither gives the desired output. For example, if my function test_func fails, I want the subject to read 'Job Failure for test_func'. So I set the Subject field of the alert to 'Job Failure for $result.function_name$', but it just sends an email with the subject 'Job Failure for '. I have also tried other tokens like $job.label$ without success. Can somebody please pitch in?
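$result.function_name$ resolves from the first row of the search's final results, so it renders empty whenever function_name is not a field in that final table — for example if a trailing stats or table clause dropped it, or it was never extracted as a field. A sketch that guarantees the field survives to the end (the first line is a placeholder for the actual timeout search):

```spl
<your timeout search>
| stats count latest(_time) as last_failure by function_name
| table function_name count last_failure
```

With function_name present in the first result row, 'Job Failure for $result.function_name$' should fill in. Note the token only ever reads the first row, so if several functions fail in one run, per-function subjects need either one alert per function or the "trigger for each result" alert mode.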
I have set up the Universal Forwarder locally on my machine using this guide: https://splunk.paloaltonetworks.com/universal-forwarder.html

/opt/splunkforwarder/etc/system/local/inputs.conf:

[monitor:///var/log/udp514.log]
sourcetype = pan:log
disabled = 0

/opt/splunkforwarder/etc/system/local/outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = andrea-xps-15-7590:9997
disabled = false

[tcpout-server://andrea-xps-15-7590:9997]

(The local IP resolves to 'andrea-xps-15-7590', same as for the web UI.) I have checked that syslog actually sends log events into /var/log/udp514.log, so I am sure the logs are there. Port 9997 has been allowed in the Splunk UI (Forwarding and receiving settings). However, when I search source="/var/log/udp514.log", nothing shows up. Splunk also throws this message:

'The TCP output processor has paused the data flow. Forwarding to host_dest=andrea-xps-15-7590 inside output group default-autolb-group from host_src=andrea-XPS-15-7590 has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.'

I understand data has been forwarded from host_src, but the indexer does not ingest it for some reason, so it gets blocked? Any idea where the problem is?
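That blocked-queue message usually means the receiving side is not accepting the connection or its queues are full. A sketch of a check to run on the indexer's own search bar (component names as I recall them; verify against your _internal data):

```spl
index=_internal sourcetype=splunkd (component=TcpInputProc OR component=TcpOutputProc OR blocked=true)
```

If nothing from TcpInputProc mentions a listener on port 9997, receiving isn't actually enabled on the instance the UF points at (Settings > Forwarding and receiving > Configure receiving, then confirm with netstat that something is listening on 9997). If you instead see blocked=true queue messages, the indexer itself is stalled, commonly from a full disk or a downstream output it cannot reach.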
I am feeling puzzled. I am trying to take a date, convert it to epoch time, subtract a number of seconds from that time, and then reconstruct it back into a human-readable format.

I have a field called "eventTime" that comes in looking like this: 2023-02-20T22:33:00.000Z

I convert it to epoch time like so:

| eval eventTime=strptime(eventTime,"%Y-%m-%dT%H:%M:%S.%3QZ")

I then convert to server time, like so:

| eval eventTime=strftime(eventTime, "%+")

After those steps, the value in "eventTime" looks like: Mon Feb 20 22:33:03 MST 2023

I then attempt to convert back to epoch like so:

| eval event_etime=strptime(eventTime, "%a %b %e %H:%M:%S %Z %Y")

This works, and converts it to this value: 1677034564.000000

Everything works as I would expect thus far... it is when I attempt to do any sort of math that the value turns null. So, with this statement:

| eval event_etime=tonumber(event_etime)-25200

I am attempting to subtract 25,200 seconds from the time, but when I do this step the value goes null. I have tried with and without the tonumber function; it doesn't change a thing. Any ideas on how I can subtract 25200 from the epoch time and retain a value that is not null?
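The round trip through strftime and a second strptime isn't needed, and dropping it sidesteps whatever is nulling the value — a guess: the "%Z %Y" parse failing on some events, since strptime returns null on a failed parse and null minus 25200 stays null. A sketch that parses once and does the arithmetic directly on the epoch number:

```spl
| eval event_etime=strptime(eventTime, "%Y-%m-%dT%H:%M:%S.%3QZ") - 25200
| eval eventTime_adjusted=strftime(event_etime, "%+")
```

Equivalently, relative_time(event_etime, "-25200s") performs the same subtraction. Either way, the habit that avoids this class of bug is to keep the epoch value in its own numeric field for all math and only apply strftime at the end, purely for display.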