All Topics

We have a job that occasionally loops around the same code, spewing out the same set of messages (two different messages from the same job). Is it possible to identify processes where the last two messages match the previous two messages?

. . .
message1
message2
message1 <-- starts repeating/looping here
message2
message1
message2
message1
message2
. .

Any help appreciated.

Mick
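A minimal SPL sketch of one way to find this, assuming the repeating text is already extracted into a field called message and each job is identified by a job_id field (both names are placeholders for whatever your events actually contain):

index=your_index sourcetype=your_job_logs
``` keep events in time order so autoregress compares consecutive messages ```
| sort 0 _time
``` copy the previous three messages onto each event ```
| autoregress message p=1-3
``` the current pair equals the pair before it, i.e. the job has started looping ```
| where message=message_p2 AND message_p1=message_p3
| stats count AS loop_hits, latest(_time) AS last_seen BY job_id

If several jobs are interleaved in the results, sort by job_id and _time (or run the check per job) so autoregress does not compare messages across a job boundary.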
Hi Team, I am reaching out to seek your valuable inputs regarding setting up restrictions on app-level logs under a particular index in Splunk. The use case is as follows: we have multiple application logs that fall under a single index. However, we would like to set up restrictions for a specific app name within that index. While we are aware of setting up restrictions at the index level, we are wondering if there is a way to further restrict access to logs at the app level. Our goal is to ensure that only authorized users have access to the logs of the specific app within the designated index. Thank you in advance for your assistance and expertise. We look forward to your valuable inputs.
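There is no true per-app access control inside a single index, but a common workaround is a role-based search filter. A minimal authorize.conf sketch, assuming the app's events carry something distinguishing such as a sourcetype or source value (the role name, index name and sourcetype below are placeholders):

[role_appx_viewer]
# inherit basic search capabilities
importRoles = user
# only this index is searchable by the role
srchIndexesAllowed = app_index
srchIndexesDefault = app_index
# every search this role runs is silently AND-ed with this filter
srchFilter = sourcetype=appx:logs

One caveat: srchFilter applies to everything the role searches, and filters inherited from multiple roles can combine in surprising ways, so users who also need unrestricted access to other data are usually given a separate account or role.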
Hi Splunk gurus, looking for some help please. I am trying to extract the timestamp from JSON sent via an HEC token. My inputs.conf and props.conf are in the same app and are deployed on heavy forwarders.

My props:

[hec:azure:nonprod:json]
MAX_TIMESTAMP_LOOKAHEAD = 512
TIME_PREFIX = createdDateTime\"\:\s+\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z
TZ = UTC

Sample event:

{"@odata.type": "#microsoft.graph.group", "id": "XXXXXXXXXXXXX", "deletedDateTime": null, "classification": null, "createdDateTime": "2022-06-03T02:05:02Z", "creationOptions": [], "description": null, "displayName": "global_admin", "expirationDateTime": null, "groupTypes": [], "isAssignableToRole": true, "mail": null, "mailEnabled": false, "mailNickname": "XXXX", "membershipRule": null, "membershipRuleProcessingState": null, "onPremisesDomainName": null, "onPremisesLastSyncDateTime": null, "onPremisesNetBiosName": null, "onPremisesSamAccountName": null, "onPremisesSecurityIdentifier": null, "onPremisesSyncEnabled": null, "preferredDataLocation": null, "preferredLanguage": null, "proxyAddresses": [], "renewedDateTime": "2022-06-03T02:05:02Z", "resourceBehaviorOptions": [], "resourceProvisioningOptions": [], "securityEnabled": true, "securityIdentifier": "XXXXXXXXXXXXX", "theme": null, "visibility": "Private", "onPremisesProvisioningErrors": [], "serviceProvisioningErrors": [], "graphendpointtype": "directoryroles"}

I want to extract the timestamp from the createdDateTime field. I tried TIMESTAMP_FIELDS = createdDateTime, and INGEST_EVAL=_time=strptime(spath(_raw,"createdDateTime"), "%Y-%m-%dT%H:%M:%S%Z") as per previous answers.splunk posts, but nothing worked; Splunk still uses index time only. What am I doing wrong here?
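One thing that may explain this: TIME_PREFIX/TIME_FORMAT only apply when the event goes through normal line parsing (for example the HEC /raw endpoint); data sent to the /event endpoint arrives pre-parsed, so those props are skipped. INGEST_EVAL is also a transforms.conf setting, so it has to be wired up via a TRANSFORMS line in props.conf on the heavy forwarder that terminates the HEC connection. A rough, untested sketch of that wiring, assuming Splunk 8.1+ so json_extract() is available in INGEST_EVAL (verify the format string against your data):

props.conf
[hec:azure:nonprod:json]
TRANSFORMS-set-created-time = hec_time_from_createdDateTime

transforms.conf
[hec_time_from_createdDateTime]
# pull createdDateTime out of the raw JSON and use it as the event time
INGEST_EVAL = _time=strptime(json_extract(_raw, "createdDateTime"), "%Y-%m-%dT%H:%M:%S%Z")

If the events are sent to the /raw endpoint instead, the original TIME_PREFIX/TIME_FORMAT approach should also start to apply.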
Hi @niketn, how can I make the dashboard auto-refresh once a domain/data entity is selected so the results populate? Also, when I select the second option from the dropdown, the older results still remain. Query used:

<input type="dropdown" token="tokSystem" searchWhenChanged="true">
  <label>Domain Entity</label>
  <fieldForLabel>$tokEnvironment$</fieldForLabel>
  <fieldForValue>$tokEnvironment$</fieldForValue>
  <search>
    <query>| makeresults
| eval goodsdevelopment="a",materialdomain="b,c",costsummary="d"</query>
  </search>
  <change>
    <condition match="$label$==&quot;a&quot;">
      <set token="inputToken">test</set>
      <set token="outputToken">test1</set>
    </condition>
    <condition match="$label$==&quot;c&quot;">
      <set token="inputToken">dev</set>
      <set token="outputToken">dev1</set>
    </condition>
    <condition match="$label$==&quot;m&quot;">
      <set token="inputToken">qa</set>
      <set token="outputToken">qa1</set>
    </condition>
  </change>
------
<row>
  <panel>
    <html id="messagecount">
      <style>
        #user{
          text-align:center;
          color:#BFFF00;
        }
      </style>
      <h2 id="user">INBOUND </h2>
    </html>
  </panel>
</row>
<row>
  <panel>
    <table>
      <search>
        <query>index=$indexToken1$ source IN ("/*-*-*-$inputToken$") | timechart count by ObjectType```| stats count by ObjectType</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
------
<row>
  <panel>
    </style>
    <h2 id="user">outBOUND </h2>
    </html>
    <chart>
      <search>
        <query>index=$indexToken$ source IN ("/*e-f-$outputToken$-*-","*g-$outputToken$-h","i-$outputToken$-j")</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
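On the stale-results problem, one pattern worth trying (sketched below, not tested against your exact dashboard) is a fallback <condition> with no match attribute that unsets the tokens when the selected label has no matching condition, combined with depends on the result panels so they hide while the tokens are unset:

<change>
  <condition match="$label$==&quot;a&quot;">
    <set token="inputToken">test</set>
    <set token="outputToken">test1</set>
  </condition>
  <!-- ...other conditions... -->
  <condition>
    <!-- nothing matched: clear the tokens so the old panels hide -->
    <unset token="inputToken"></unset>
    <unset token="outputToken"></unset>
  </condition>
</change>
...
<panel depends="$inputToken$">
  ...
</panel>

With searchWhenChanged="true" already set on the input, panels whose queries reference $inputToken$/$outputToken$ should re-run as soon as those tokens change.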
Hello, we're using Splunk Enterprise version 9.1.0.2 and trying to configure Splunk to send email alerts, but cannot make it work. We've tried both Gmail and O365; here are the errors:

1. Email settings: Mail host: smtp.gmail.com:587, Enable TLS, enter username and password (we use an app password for smtp.gmail.com)
--> Error: sendemail:561 - (530, b'5.7.0 Authentication Required. Learn more at\n5.7.0 https://support.google.com/mail/?p=WantAuthError 5-20020a17090a1a4500b00274e610dbdasm2199058pjl.8 - gsmtp', 'sender@gmail.com') while sending mail to: receive@....

2. Email settings: Mail host: smtp.office365.com:587, Enable TLS, enter username and password (the username and password can log in to Outlook successfully)
--> Error: sendemail:561 - (530, b'5.7.57 Client not authenticated to send mail. [SGAP274CA0001.SGPP274.PROD.OUTLOOK.COM 2023-09-21T02:01:45.399Z 08DBB9CB1E03821B]', 'sender@senderdomain.com') while sending mail to: receive@....

Please support. Thank you.
I have downloaded my logs, which are encoded as "GBK"/"GB2312", from my server to my desktop, and ingested them using the Add Data function in Splunk Web. My Splunk Enterprise version is 8.2.1. When setting the sourcetype while adding the data, I chose the charset 'gb2312', and when I search the logs in Splunk they display correctly. However, when I use inputs and outputs conf in a Splunk universal forwarder to send the logs to the Splunk platform, they display mostly correctly but contain some garbled characters. I have followed the suggestion in Solved: Can splunk support GBK? - Splunk Community, but it did not solve my problem. Below is the configuration on my server and the UF.

Server: /opt/splunk/etc/system/local/props.conf

[mysourcetype]
CHARSET = GB2312

UF: /opt/splunk/etc/apps/search/local/inputs.conf

[default]
host = 10.31.xx.xx

[monitor:///home/abc/log/20230921/*.log]
index = abcd
sourcetype = mysourcetype
disabled = false
I am currently encountering a problem where I have a log file that is archived to another folder after reaching certain conditions. I have set up UF monitoring for both files, but the data collected may be duplicated. However, if I do not monitor the archive folder, some logs near the end of the file are lost. I suspect it may be related to the file being archived too quickly. How can I solve this problem? For example, my log file is abc.log, and it is archived to the path /debug/abc.1.log. I have set up monitors for both files, but the data is duplicated; however, if I do not monitor /debug/abc.1.log, I lose the content at the end of the file.
When attempting to connect to AWS from within the AWS app, I am receiving 'SSL validation failed for https://sts.us-east-1.amazonaws.com/ [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local certificate (_ssl.c:1106)'. I received a cert.pem file from AWS, but I don't know where to put this file in Splunk. The Splunk version is 8.2.8 and the Splunk Add-on for Amazon Web Services (AWS) version is 7.0.0. Can you guide me on where to apply the cert.pem file? Thank you.
We currently have an alert set up that generates a ticket in our ticketing platform. We are moving to a new ticketing platform and have used collect to capture the event and put it in a new index for that ticketing platform to pull data from. Is there a way to rename fields of the event that is collected, but not change the field names for the current alert? We have to have different field names for the new ticketing system to map correctly. My only idea right now is to either duplicate the alert and have them run in parallel, or, when the ticketing system queries Splunk for new events, have that query contain a search macro that does the renaming before the events are ingested.
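If the data for the new platform is produced by its own search (or can be), one low-impact option is to do the renaming only in the search that feeds collect, so the existing alert keeps its field names untouched. A sketch with placeholder field and index names:

index=source_index <same conditions as the existing alert>
``` rename only for the copy the new ticketing platform will read ```
| rename old_field_1 AS new_field_1, old_field_2 AS new_field_2
| collect index=new_ticketing_index

This could run as a second scheduled search in parallel with the current alert, or the rename could live in the macro the new platform's query calls, as you suggested.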
I am trying to create a dashboard that holds multiple tables of WebSphere App Server configuration data. The data I have looks like this:

{"ObjectType ":"AppServer","Object":"HJn6server1","Order":"147","Env":"UAT","SectionName":"Transport chain: WCInboundDefaultSecure:Channel HTTP", "Attributes":{"discriminationWeight": "10","enableLogging": "FALSE","keepAlive": "TRUE","maxFieldSize": "32768","maxHeaders": "500","maxRequestMessageBodySize": "-1","maximumPersistentRequests": "100","name": "HTTP_4","persistentTimeout": "30","readTimeout": "60","useChannelAccessLoggingSettings": "FALSE","useChannelErrorLoggingSettings": "FALSE","useChannelFRCALoggingSettings": "FALSE","writeTimeout": "60"}}

Every event is a configuration section within an app server, where:
ObjectType - AppServer
Object - name of the app server (ex. "HJn6server1")
Env - environment (ex. Test, UAT, PROD)
SectionName - name within the app server configuration that holds attributes
Attributes - configuration attributes for a SectionName

I have been able to create one table per SectionName, but can't extend that to multiple sections. I used the following code to make one table:

index = websphere_cct (Object= "HJn5server1" Env="Prod") OR (Object = "HJn7server3" Env="UAT") SectionName="Process Definition" Order [ search index=websphere_cct SectionName | dedup Order | table Order ]
| fields - _*
| fields Object Attributes.* SectionName
| eval Object = ltrim(Object, " ")
| rename Attributes.* AS *
| table SectionName Object *
| fillnull value=""
| transpose column_name=Attribute header_field=Object
| eval match = if('HJn5server1' == 'HJn7server3', "y", "n")

Output:

Attribute                  HJn7server3                  HJn5server1                  match
SectionName                Process Definition           Process Definition           y
IBM_HEAPDUMP_OUTOFMEMORY                                                             y
executableArguments        []                           []                           y
executableTarget           com.ibm.ws.runtime.WsServer  com.ibm.ws.runtime.WsServer  y
executableTargetKind       JAVA_CLASS                   JAVA_CLASS                   y
startCommandArgs           []                           []                           y
stopCommandArgs            []                           []                           y
terminateCommandArgs       []                           []                           y
workingDirectory           ${USER_INSTALL_ROOT}         ${USER_INSTALL_ROOT}         y

What I would like to do is create as many tables as there are SectionNames for a given comparison between two Objects. I cannot figure out how to modify the code so that one dashboard can show several tables, one per SectionName with its associated Attributes, for the two app servers being compared. Please help.
Hi, I have a dashboard that shows service ticket counts based on different parameters. Now I need to show a trend for the current year and the previous year for the duration selected by the user in the time picker. For example, if the user selects Jan 1, 2023 to Apr 1, 2023 in the time picker, then I need to form a query that selects the same duration of the previous year (Jan 1, 2022 to Apr 1, 2022) and shows the trend. How do I create the previous-year duration based on the duration selected in the time picker? Please advise.
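One approach, sketched below: let addinfo expose the selected time range inside a subsearch and shift it back a year with relative_time, so the same time picker drives both periods (the index and span are placeholders, and the append/subsearch pattern is worth validating against your data volume):

index=service_tickets earliest=$time_picker.earliest$ latest=$time_picker.latest$
| eval period="current year"
| append
    [ search index=service_tickets
        [ | makeresults
          ``` addinfo exposes the effective search time range as info_min_time / info_max_time ```
          | addinfo
          | eval earliest=relative_time(info_min_time, "-1y"), latest=relative_time(info_max_time, "-1y")
          | return earliest latest ]
      | eval period="previous year"
      ``` shift last year's events forward a year so both lines overlay on the same axis ```
      | eval _time=relative_time(_time, "+1y") ]
| timechart span=1w count by period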
You can use data collectors to measure whether a new feature improves or degrades performance.

CONTENTS: Introduction | Video | Resources | About the presenter

Video length: 5 min 5 seconds

Leverage AppDynamics analytics to determine how a new feature introduced within an application proves to be an improvement or degradation. By creating and leveraging data collectors to look for a specific flag in a release, a new parameter or attribute, that indicates whether a feature is enabled, one can quickly gain insights as to whether there has been a performance improvement.

Additional Resources

Learn more about Configure Manual Data Collectors in the documentation.

About the presenter: Ivan Alba

Living in Milan, Italy, Ivan is a Sales Engineer in the Southern Europe region. He handles all customer segments in Italy – from enterprise to commercial to public sector. After starting out as an embedded software developer in the Telecom industry, he made the move to presales and sales to add a customer relationship role. A music-lover, he plays guitar and other instruments, enjoys landscape and astrophotography, and plays tennis, soccer, and many other sports.
We’re happy to announce the release of Mission Control 2.3, which includes several new and exciting features made available to Splunk Enterprise Security Cloud users. The new features, detailed below, improve upon your user experience in Mission Control by:

- Simplifying collaboration and note-taking within incidents
- Enabling faster investigations by improving load times and minimizing unnecessary refreshes
- Improving visibility into the value of your Splunk SOAR integration
- Enhancing the Threat Intelligence Management admin experience (for users in the AWS US East and US West regions today)

Overall, this release continues the trajectory of Splunk Mission Control as a unified security operations application that brings together your threat detection, investigation and response workflows into a common work surface. As a reminder, Mission Control is available for all Splunk Enterprise Security Cloud users today and it’s quick and simple to enable the application. Below are more details and screenshots of our latest release, and you can find more details like release notes on our documentation site.

Simpler Collaboration and Note-Taking
The notes and files features found within a response plan’s tasks are now available at the incident level, allowing greater collaboration and easier note-taking. Access them within the side panel of your incident UI so that you can take notes while you peruse your other data.

Faster Task Completion with Dynamic UI Updates
Tired of waiting for a spinner every time you update a task or change an incident’s status? Wait no more! With Mission Control’s dynamic UI updates you can now perform tasks in a fraction of the time with improved load times and a streamlined UX.

Improved Splunk SOAR Integration Visibility
With improved visibility into your Splunk SOAR usage, you can learn more details instantly such as how many playbooks and actions you have run and how many connectors are being used. You can also see which playbook you are running the most and export usage reports if desired.

Easier Threat Intelligence Onboarding and Improved UX
Several improvements to the Threat Intelligence Management admin experience (for ES Cloud users in the AWS US East and US West regions) have been released, which allow you to save time as you more easily activate and deactivate your threat intelligence integrations within Mission Control.

For more information on Mission Control such as a product tour, demo videos, blog posts, white papers, and webinars, please visit our home page for Splunk Mission Control. This post was co-authored by Kavita Varadarajan, Principal Product Manager for Splunk Mission Control.
Pagination in a table only appears in Edit mode of a Splunk dashboard, not in View mode. Can we correct this?
I have a table in a database that I need to check every 30 minutes, starting from 7:00 AM in the morning. The first alert, i.e. at 7:00 AM, should send the entire table without checking any conditions. Next, I have a field from the table named ACTUAL_END_TIME. This column can hold only one of three values: a timestamp in HH:MM:SS format, the string In-Progress, or the string NotYetStarted. I need to check this table every 30 minutes, and only trigger the alert when all the rows of the column ACTUAL_END_TIME contain only timestamps. NOTE: the alert should trigger only once per day. How do I set up this alert?
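For the "all rows have a timestamp" condition, here is a sketch of the alert search using DB Connect's dbxquery (the connection name and table are placeholders):

| dbxquery connection="my_db_connection" query="SELECT * FROM MY_TABLE"
``` flag rows whose ACTUAL_END_TIME looks like HH:MM:SS rather than In-Progress / NotYetStarted ```
| eval finished=if(match(ACTUAL_END_TIME, "^\d{2}:\d{2}:\d{2}$"), 1, 0)
| stats count AS total_rows, sum(finished) AS finished_rows
``` return a row only when every database row carries a timestamp ```
| where total_rows > 0 AND total_rows = finished_rows

Scheduling this every 30 minutes from 7:00 AM, triggering on "number of results is greater than 0", and throttling the alert for 24 hours would keep it to one notification per day; the 7:00 AM "send the whole table" run would be a separate scheduled report without the final eval/stats/where.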
Hi, I’m using the Splunk Docker image with HEC to send a log. I got the Success message as per the guideline. How can I query the log to see “hello world”, which is what I just sent? I tried a few search-related curl commands, but all of them just return a very long XML; “hello world” is not in the response. For example:

curl -k -u admin:1234567Aa! https://localhost:8089/services/search/jobs -d "search *"

Could anyone share a search curl command that can return the “hello world” I sent? I only have one record, so I don't need complicated filtering.
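A one-shot search against the export endpoint usually does what you want here, since it returns results directly instead of a job you would have to poll. Using the same credentials as your example (adjust the index and time range as needed):

curl -k -u admin:1234567Aa! https://localhost:8089/services/search/jobs/export \
     -d search='search index=* "hello world"' \
     -d output_mode=json \
     -d earliest_time=-24h

If the HEC token writes to a specific index, replacing index=* with that index will narrow the search.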
I can't figure out how to index my data from zigbee2mqtt. The logs are exported from Home Assistant via syslog, as JSON. I have tried various settings in props on the forwarder.

Current setting:

[zigbee2mqtt]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = JSON
category = structured
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = timestamp
LINE_BREAKER = ([\r\n]+)
disabled = false
pulldown_type = true

And on the search head:

[zigbee2mqtt]
KV_MODE = JSON

This is how the data appears in the log. To me it looks like some kind of mix, not just JSON data.

Sep 20 19:13:19 linsrv 1 2023-09-20T17:13:19.941+02:00 localhost Zigbee2MQTT - - - MQTT publish: topic 'zigbee2mqtt/P001', payload '{"auto_off":null,"button_lock":null,"consumer_connected":true,"consumption":7.82,"current":0,"device_temperature":25,"energy":7.82,"led_disabled_night":null,"linkquality":255,"overload_protection":null,"power":0,"power_outage_count":3,"power_outage_memory":null,"state":"OFF","update":{"installed_version":41,"latest_version":32,"state":"idle"},"update_available":false,"voltage":234}'/n
host = linsrv   index = zigbee   source = /disk1/syslog/in/linsrv/2023-09-20/messages.log   sourcetype = zigbee2mqtt

Sep 20 19:08:13 linsrv06.hemdata.hemdata.se 1 2023-09-20T17:08:13.988+02:00 localhost Zigbee2MQTT - - - MQTT publish: topic 'zigbee2mqtt/P002', payload '{"auto_off":null,"button_lock":null,"consumer_connected":true,"consumption":2.58,"current":0,"device_temperature":23,"energy":2.58,"led_disabled_night":null,"linkquality":255,"overload_protection":null,"power":0,"power_outage_count":0,"power_outage_memory":null,"state":"OFF","update":{"installed_version":41,"latest_version":32,"state":"idle"},"update_available":false,"voltage":229}'/n
host = linsrv   index = zigbee   source = /disk1/syslog/in/linsrv/2023-09-20/messages.log   sourcetype = zigbee2mqtt

Sep 20 19:08:13 linsrv 1 2023-09-20T17:08:13.968+02:00 localhost Zigbee2MQTT - - - MQTT publish: topic 'zigbee2mqtt/P001', payload '{"auto_off":null,"button_lock":null,"consumer_connected":true,"consumption":7.82,"current":0,"device_temperature":25,"energy":7.82,"led_disabled_night":null,"linkquality":255,"overload_protection":null,"power":0,"power_outage_count":3,"power_outage_memory":null,"state":"OFF","update":{"installed_version":41,"latest_version":32,"state":"idle"},"update_available":false,"voltage":234}'/n
host = linsrv   index = zigbee   source = /disk1/syslog/in/linsrv/2023-09-20/messages.log   sourcetype = zigbee2mqtt

Sep 20 19:08:06 linsrv 1 2023-09-20T17:08:06.199+02:00 localhost Zigbee2MQTT - - - MQTT publish: topic 'zigbee2mqtt/P002', payload '{"auto_off":null,"button_lock":null,"consumer_connected":true,"consumption":2.58,"current":0,"device_temperature":23,"energy":2.58,"led_disabled_night":null,"linkquality":255,"overload_protection":null,"power":0,"power_outage_count":0,"power_outage_memory":null,"state":"OFF","update":{"installed_version":41,"latest_version":32,"state":"idle"},"update_available":false,"voltage":229}'/n
host = linsrv   index = zigbee   source = /disk1/syslog/in/linsrv/2023-09-20/messages.log   sourcetype = zigbee2mqtt
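Because the JSON is wrapped inside a syslog line, INDEXED_EXTRACTIONS = JSON (and KV_MODE = JSON) will not parse it, since the events are not pure JSON. One hedged alternative is to leave the events as plain syslog and pull the payload out at search time, for example:

index=zigbee sourcetype=zigbee2mqtt
``` extract the JSON payload from the syslog wrapper ```
| rex field=_raw "payload '(?<payload>\{.*\})'"
``` parse the extracted JSON into fields ```
| spath input=payload

The same rex could instead be saved as an EXTRACT in props.conf on the search head, or the syslog prefix could be stripped at index time with a SEDCMD if you want pure JSON events in the index.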
Greetings, I have a search that lists every index and the sourcetypes contained within it:

|tstats values(sourcetype) where index=* by index

What I like about it is that I can see each index and a list of all of the sourcetypes specific to that index. I'm trying to get this same data format, but with a column of the indexes and a column of all of the fields each index contains. I'm working on adding indexes to an app that already lists what fields it needs but doesn't know what index they are associated with. So I have something like the hash value fields md5 and MD5. They are different because of the source they come from, but I need to find the index they live in to add it. I also think it would just be useful for audit purposes, so expected fields can be confirmed at a glance. Please let me know if you have any questions. Thank you! Best, Brian
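tstats only sees indexed fields, so search-time fields will not show up there. One way to build a field list per index from raw events is a stats/untable pattern like the sketch below (run it over a small time window on large indexes, since it only reflects whatever range you search):

index=*
``` count(*) AS * produces one column per field name seen in the events ```
| stats count(*) AS * BY index
``` flatten the wide row into index / field / count triples ```
| untable index field_name event_count
| where event_count > 0
| stats values(field_name) AS fields BY index

fieldsummary is another option for listing fields, but it does not break them out by index on its own.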
I'm looking to use the following as my timestamp. What should I use in props as my timestamp format and timestamp prefix?

[20230718:001541.421] : [WARN ]
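A props.conf sketch for that format (the sourcetype name is a placeholder, and MAX_TIMESTAMP_LOOKAHEAD assumes the bracket sits at the very start of the line):

[your_sourcetype]
# timestamp sits right after the opening bracket at the start of the line
TIME_PREFIX = ^\[
# 20230718:001541.421  ->  year month day : hour minute second . milliseconds
TIME_FORMAT = %Y%m%d:%H%M%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 20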
Hello all, I'm having an issue where my license has stopped showing that any data is being ingested. The data is still coming in and everything looks good, but the license usage report shows that no data is being ingested even though it is. Does anyone have a solution, or has anyone run into this problem before?