All Posts

To bin the event time, you need to keep it as a number (after parsing with strptime). You can format it as a string later, or use fieldformat for display purposes:

index=test1 sourcetype=test2
| eval Event_Time=strptime(SUBMIT_TIME,"%Y%m%d%H%M%S")
| table Event_Time
``` This next line is redundant since you only have Event_Time to the nearest second anyway ```
| bin Event_Time span=1s
| sort 0 Event_Time
| fieldformat Event_Time=strftime(Event_Time, "%m/%d/%y %H:%M:%S")
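The distinction between the numeric epoch value and its formatted string can be illustrated outside Splunk. This is a rough Python sketch of the same pipeline (the raw value mirrors the example in the question; it is not a Splunk API): strptime produces a number you can bucket, and formatting is a separate display step.

```python
from datetime import datetime

# Parse the raw CSV timestamp into an epoch number -- the analogue of SPL's strptime()
raw = "20240418142025"
epoch = datetime.strptime(raw, "%Y%m%d%H%M%S").timestamp()

# A numeric value can be bucketed arithmetically -- the analogue of `bin Event_Time span=1s`
bucket = int(epoch // 1)  # floor to the 1-second boundary

# Formatting back to a string is display-only -- the analogue of fieldformat/strftime()
display = datetime.fromtimestamp(epoch).strftime("%m/%d/%y %H:%M:%S")
print(display)  # 04/18/24 14:20:25
```

Trying to bin the `display` string directly fails for the same reason `bucket span=1s Event_Time` does nothing on a formatted field: bucketing is arithmetic, and arithmetic needs a number.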
Ideally you'd be able to chunk the JSON log event into smaller subunits, but this depends on what your JSON log event looks like. If your JSON log events are over 10k characters long, they may be getting truncated. If this is the case, you can override the truncation by putting the following setting in a props.conf file on the indexing machines:

[<yoursourcetype>]
TRUNCATE = <some number above the size of your JSON logs, or 0 for no truncation>

If your broken JSON logs in Splunk are less than 10k characters long, then it could be that Splunk is splitting the logs part-way through the JSON object, so you would need to set the LINE_BREAKER setting so that it properly splits whole JSON objects.
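As a concrete sketch combining both fixes (the sourcetype name, size, and regex are illustrative assumptions, not taken from the original post; the regex assumes back-to-back top-level JSON objects):

```ini
# props.conf on the indexers (or heavy forwarders, if they do the parsing)
[my_json_sourcetype]
# Raise the truncation limit above the longest expected event (0 = no limit)
TRUNCATE = 50000
# Break events between JSON objects: a closing brace followed by an opening brace.
# The capture group is the discarded delimiter; the braces stay with their events.
LINE_BREAKER = \}(\s*)\{
SHOULD_LINEMERGE = false
```

If your events arrive one object per line, the default newline-based LINE_BREAKER is usually fine and only TRUNCATE needs changing.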
Can someone assist on this request please? Thank you.
Since you already have applicationName=" as your prefix, this line

index=mulesoft environment=$env$ applicationName=$BankApp$ InterfaceName=$interface$

will expand to

index=mulesoft environment=$env$ applicationName=applicationName="*" InterfaceName=InterfaceName="*"

Either remove applicationName= from your prefix, or from your search:

index=mulesoft environment=$env$ $BankApp$ $interface$
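The double-prefixing is plain string substitution, so it can be reproduced outside the dashboard. A small Python sketch (the token values are made up to mimic the dropdown defaults):

```python
# Simulate Simple XML token substitution: the BankApp token value already
# carries the prefix applicationName=" and suffix ", so embedding it after
# another applicationName= doubles the field name.
search_template = 'index=mulesoft environment=$env$ applicationName=$BankApp$'
tokens = {"env": "prod", "BankApp": 'applicationName="*"'}  # prefix + value + suffix

expanded = search_template
for name, value in tokens.items():
    expanded = expanded.replace(f"${name}$", value)

print(expanded)
# index=mulesoft environment=prod applicationName=applicationName="*"
```

The broken output makes it obvious the prefix must live in exactly one place: the token definition or the search, never both.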
We are excited to announce the 2024 cohort of the Splunk MVP program. Splunk MVPs are passionate members of the Splunk community who have shown consistent participation in at least one area of our community programs. The Splunk MVP program, a complement to our longstanding SplunkTrust program, is one way the community team recognizes and celebrates our star contributors. The Splunk community is built upon the efforts of its members and the contributions they make to help their peers find success. We are incredibly thankful for your contributions!

How do I become a Splunk MVP? Do you love answering questions about Splunk SOAR in Splunk Answers? That’s one way! Driving fabulous engagement as a leader of your Splunk User Group? That’s another way! Rockin’ out the Slack contributions? Chances are, you’re already well on your way! Splunk MVP selection focuses on community contributions, prioritizing engagement and support, in whatever areas you might be most passionate about.

Introducing our 2024 cohort of Splunk MVPs: First and foremost, let us say, THANK YOU! Our programs are built around your knowledge of Splunk, domain expertise, and passion to help your peers find success.
And without further ado, our 2024 MVPs are… Uday Agrawal Marc Andersen Pedro Borges Steve Baker Johnny Blizzard Mark Cooke Marius Cuanda Eric Favreau Andrew Gerber Alain Giammarinaro Suman Gajavelly Lakshmanan Ganesan Adam Gold Taruchit Goyal Martin Hettervik Jason Hotchkiss Dal Jeanis Aaron Johnsen Rohit Joshi Nancy Kafer Chandra Sekhar Kolla Mike Langhorst Magnus Lord Alex Lu Rakesh Luhar Jo Øverland Guilhem Marchard Andrew McManus Manjunath Meti Phil Meyerson Renjith Nair Deyan Petrov Michael Pitcher Chris Risley Jean-Philippe Rosse David Rutstein Ryan Plas Gregory Rivas Rafael Santos Nadine Shillingford Matt Snyder Paul Schultze Trevor Scroggins Raja Sha Meet Shah Ayush Sharma Young So Somesh Soni Adam Denham Smith George Starcher Brandon Sternfield Davin Studer Erling Teigen Edoardo Vicendone Michael Uschmann

Congratulations once again to the 2024 cohort of Splunk MVPs, and we hope to see some of you at .conf in June!

The Splunk Community Team
Hi, I'm currently ingesting CSV files into Splunk. One of the fields records the actual event timestamp in this format: YYYYmmddHHMMSS (e.g. 20240418142025). I need to format this field's value in a way that Splunk will understand the data (e.g. date, hour, minutes, seconds, etc.). Once this formatting is complete, I need to sort these timestamps/events for each second (e.g. bucket span=1s Event_Time). Note here Event_Time is the formatted data from the original event timestamp field. So far, I've tried this:

index=test1 sourcetype=test2
| eval Event_Time=strftime(strptime(SUBMIT_TIME,"%Y%m%d%H%M%S"), "%m/%d/%y %H:%M:%S")
| table Event_Time

The above command gives me decent output such as 04/18/24 14:20:25. But when I try to group values of Event_Time using "bucket span=1s Event_Time", it does not do anything. Note that "bucket span=1s _time" works, as I'm using the Splunk default time field. I'd appreciate any help to make this formatting work for post-processing Event_Time. Thank you in advance.
I am struggling to find a post answering my question because the naming for Splunk Enterprise and Enterprise Security is so similar, and I am only seeing results for ES. I want to find a way to add threat intelligence feeds into my Splunk Enterprise environment so my organization can eventually move off of the other SIEM we have been using in tandem with Splunk. Is this possible with Splunk Enterprise? I know ES has the capability, but we are strictly on-prem at the moment and I do not see us moving to it anytime soon. Any suggestions? Has anyone set these up on-prem?
@richgalloway: Sorry, I did not get what rule you are mentioning. Could you please be more clear on this?

434531263412:us-west-2:lambda_functions -> lambda_functions
434531263412:us-west-2:nat_gateways -> gateways
434531263412:us-west-2:application_load_balancers -> load_balancers

Yes, this is the requirement. In the above, the right-side values are the values I want from the source field. I want to extract the service name from this field value.
Any luck with support? I tried the outputs.conf solution in this thread, but it doesn't seem to have worked. Pre-upgrade from 9.0.x to 9.2.1 I had 300ish clients in my DS; right now only 14 are showing up.

Thanks, Dave
Installing and Configuring the Cisco AppDynamics Smart Agent and Machine Agent on Ubuntu Linux

Download the Smart Agent:
Go to download.appdynamics.com
Click on the dropdown 'Type' and find AppDynamics Smart Agent for Linux ZIP
You can also curl the download to your Linux box.

The AppDynamics Smart Agent requires pip3 to be present, and the appdsmartagent folder is where we will install the Smart Agent, so if you don't have them, please run the below:

mkdir -p /opt/appdynamics/appdsmartagent
sudo apt update
sudo apt install -y python3-pip
cd /opt/appdynamics/appdsmartagent

Once you have these set up, curl the ZIP artifact:

curl -L -O -H "Authorization: Bearer xxxxx.xxxx.xxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxx;" "https://download.appdynamics.com/download/prox/download-file/appdsmartagent/24.2.0.1472/appdsmartagent_64_linux_24.2.0.1472.zip"

Unzip the content, in this case:

unzip appdsmartagent_64_linux_24.2.0.1472.zip

Run the install script placed in the appdsmartagent folder:

./install-script.sh

Add the configuration inside /opt/appdynamics/appdsmartagent/config.ini:

vi /opt/appdynamics/appdsmartagent/config.ini

You are required to configure Smart Agents to register with the Controller. Edit the /opt/appdynamics/appdsmartagent/config.ini file for the required Smart Agent configuration. Ensure that you update the following parameters:

ControllerURL: The URL of the Controller on which you want to establish the connection with the Smart Agent.
ControllerPort: The Controller port.
FMServicePort: The port on which the Smart Agent connects to the FM service (Agent Management). It is 8090 for an on-premises Controller and 443 for a SaaS Controller.
AccountAccessKey: The account access key on the Controller.
AccountName: The account name on the Controller to which the Smart Agent will report.
An example for the above:

ControllerURL = https://support-controller.saas.appdynamics.com
ControllerPort = 443
FMServicePort = 443
AccountAccessKey = abcd-ahsasasj-asbasas
AccountName = controllerces

Once done, start the Smart Agent with the below:

systemctl start smartagent.service

You can check the status with:

systemctl status smartagent.service

The first part is done. Now let's install a Machine Agent. Once your agent is started, please go to controller.com/controller/#/apm/agent-management-install-agents

On the Install Agents page, where it says 'Select Agent Attributes', select 'Machine'. Under 'Select where to Deploy Agents', add the host where you installed the Smart Agent and move it to the left. Click 'Apply' and then click 'Done'.

Once this is done, you can install it with the DEFAULT config, or you can set custom agent attributes. If you wish to pass more information, you can get key: value examples from the Cisco AppDynamics docs (Ansible Configuration for Machine Agent). For example:

controller_account_access_key: "123key"

NOTE: Your controller-host, port, access-key, and accountName are already passed. You can select SIM (Server Visibility) and SSL enabled based on your needs. I am marking JRE Support enabled as well.

Once done, the agent will take 10–15 minutes to show up on your Controller. It will be automatically installed and be in a running state. Logs can be found in the /opt/appdynamics/machine-agent folder.
OK. Time to dig into the gory details of Splunk licensing. When you have an enforcing license (either a trial, a dev license, or a "full" license not big enough to be non-enforcing), each day you exceed your daily ingestion allowance will generate a warning. If you exceed a given number of warnings during a given time period (with a trial version it's 5 warnings in a 30-day rolling window; with a "full" Splunk Enterprise license it's 45 warnings in 60 days), your environment will go into "violation mode". Most importantly, it will stop allowing you to search any data other than the internal indexes. And the tricky part is that even if you add a new/bigger/whatever license at this point, it will not automatically "unlock" your environment. You need to either wait for the violations to clear (for some license types) or request a special unlock license from the Splunk sales team. So, tl;dr: if you let your Splunk run out of license, it's not as easy as "I add my freshly bought license" and it starts working again.
Yes, but it's still showing the same error:

Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the left hand side: applicationName=APPLICATION_NAME.

This is the query which I am using:

index=mulesoft environment=$env$ applicationName=$BankApp$ InterfaceName=$interface$ (priority="ERROR" OR priority="WARN")
| stats values(*) as * by correlationId
| rename content.InterfaceName as InterfaceName content.FileList{} as FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR",priority="WARN","WARN",priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| where FileList!=" "
Try changing the applicationName to APPLICATION_NAME in the prefix:

<input type="dropdown" token="BankApp" searchWhenChanged="true">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query>
| inputlookup BankIntegration.csv
| dedup APPLICATION_NAME
| sort APPLICATION_NAME
| table APPLICATION_NAME
    </query>
  </search>
  <fieldForLabel>ApplicationName</fieldForLabel>
  <fieldForValue>APPLICATION_NAME</fieldForValue>
  <default>*</default>
  <prefix>APPLICATION_NAME="</prefix>
  <suffix>"</suffix>
</input>

In the second lookup, you are trying to filter with applicationName="" whereas the lookup file seems to have APPLICATION_NAME as its header.
Your fieldForLabel has to be a field returned by the search query, which it isn't in either instance.
Hi, I have installed cisco networks app and add-on. I have a labdata file with many events loaded to splunk. All data can be seen from search engine, but the app shows no result. Is it possible to us... See more...
Hi, I have installed the Cisco Networks app and add-on. I have a labdata file with many events loaded into Splunk. All data can be seen from the search engine, but the app shows no results. Is it possible to use the labdata information in Cisco Networks? Should I add some configuration in order for it to work?
To summarize:

434531263412:us-west-2:lambda_functions -> lambda_functions
434531263412:us-west-2:nat_gateways -> gateways
434531263412:us-west-2:application_load_balancers -> load_balancers

If this is correct, then more information is needed. What is the rule that determines how much of the service name is to be used?
04-18-2024 13:36:06.590 ERROR EvalCommand [102993 searchOrchestrator] - The 'bit_shift_left' function is unsupported or undefined.

I believe the function requires 9.2.0+

Thanks for noticing! I always assumed that bitwise operations had been part of SPL from day one, but no. The document has this footer: "This documentation applies to the following versions of Splunk® Enterprise: 9.2.0, 9.2.1." (Searching in previous versions results in the same pointers to 9.2.)

For the above, should the second set have been given a different value for the field? Those are really bad copy-and-paste errors. Corrected.
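For anyone stuck on a pre-9.2 instance, note that a left bit shift is mathematically just multiplication by a power of two, which plain eval arithmetic can do. A quick sanity check of that equivalence in Python (illustrative only; the values are made up):

```python
# bit_shift_left(x, n) computes the same value as x * 2**n for non-negative
# integers; likewise a right shift is floor division by 2**n.
x, n = 5, 3
shifted = x << n
print(shifted)                 # 40
print(shifted == x * 2 ** n)   # True
print((shifted >> n) == x)     # True
```

So `| eval y = x * pow(2, n)` should behave like bit_shift_left for the common non-negative-integer case, without requiring 9.2.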
Thanks in advance. I am trying to fetch application name and interface details from an input lookup and match them in the Splunk query, but I am getting the below error:

Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the left hand side: applicationName=applicationName.

<input type="dropdown" token="BankApp" searchWhenChanged="true" depends="$BankDropDown$">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query>
| inputlookup BankIntegration.csv
| dedup APPLICATION_NAME
| sort APPLICATION_NAME
| table APPLICATION_NAME
    </query>
  </search>
  <fieldForLabel>ApplicationName</fieldForLabel>
  <fieldForValue>APPLICATION_NAME</fieldForValue>
  <default>*</default>
  <prefix>applicationName="</prefix>
  <suffix>"</suffix>
</input>

<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query>
| inputlookup BankIntegration.csv
| search $BankApp$
| sort INTERFACE_NAME
| table INTERFACE_NAME
    </query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>INTERFACE_NAME</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
</input>
Hi @Jerg.Weick, Thanks for your patience. Engineering has confirmed it's a bug and it is expected to be fixed in 24.4, which should hopefully be by mid-May.
You should just replace this with splunk_server=* and then it sends that to all search peers. I cannot recall what those endpoints are, but it's something under config or configurations.