All Posts


Hi @the_sigma,
if the timestamp is at the beginning of the event, you could try:

TIME_PREFIX = ^\[
TIME_FORMAT = %Y%m%d:%H%M%S.%3N

If it isn't at the beginning of the event, please share a sample of your events, masked if necessary, but with the same structure.
Ciao.
Giuseppe
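As a quick sanity check outside Splunk, the layout that TIME_PREFIX and TIME_FORMAT above expect can be mimicked in plain Python. The sample event below is hypothetical, and Splunk's %3N (milliseconds) has no direct strptime equivalent in Python, so the fractional part is captured separately:

```python
import re
from datetime import datetime

# Hypothetical event matching TIME_PREFIX = ^\[ and TIME_FORMAT = %Y%m%d:%H%M%S.%3N
sample = "[20230921:142530.123] job started"

m = re.match(r"\[(\d{8}:\d{6})\.(\d{3})\]", sample)
ts = datetime.strptime(m.group(1), "%Y%m%d:%H%M%S")  # date/time part
millis = int(m.group(2))                             # %3N part, handled manually

print(ts, millis)
```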
Hi @josephjohn2211,
I suppose that you have this information in an index, and when you say "table" you're speaking of an index; if not, please correct me.
Anyway, if you have already extracted the fields (called timestamp, InProgress and NotYetStarted), you have to create a search that checks for the presence of values in the three fields and triggers when they are empty, something like this:

index=ACTUAL_END_TIME NOT (InProgress=* NotYetStarted=*)

If there are results, the alert triggers. The alert must start to trigger at 7:00, but at what hour must it stop? In my sample I use 18:00, so you can schedule the alert using this cron expression:

*/30 7-18 * * *

Please, if possible, avoid using spaces, dots or special chars (such as "-") in your field names, otherwise you have to use quotes for those fields.
If instead you didn't extract the fields, you should share some samples (both rows with the three fields and without them) so I can help you.
Ciao.
Giuseppe
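For reference, the schedule above could be sketched in savedsearches.conf like this (the stanza name is a placeholder and this is an untested sketch, not a complete alert definition):

```ini
[Missing ACTUAL_END_TIME values]
search = index=ACTUAL_END_TIME NOT (InProgress=* NotYetStarted=*)
enableSched = 1
# every 30 minutes, between 07:00 and 18:59
cron_schedule = */30 7-18 * * *
counttype = number of events
relation = greater than
quantity = 0
```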
We have a job that occasionally loops around the same code, spewing out the same set of messages (2 different messages from the same job). Is it possible to identify processes where the last 2 messages match the previous 2 messages?

.
.
message1
message2
message1   <-- starts repeating/looping here
message2
message1
message2
.
.

Any help appreciated.

Mick
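Outside Splunk, the condition being asked for ("last 2 messages equal the previous 2") can be sketched in a few lines of Python; this only illustrates the comparison logic, not an SPL solution:

```python
def tail_repeats(messages, n=2):
    """True when the last n messages are identical to the n messages before them."""
    return len(messages) >= 2 * n and messages[-n:] == messages[-2 * n:-n]

print(tail_repeats(["message1", "message2", "message1", "message2"]))  # looping pattern
print(tail_repeats(["message1", "message2", "message3", "message4"]))  # no repetition
```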
Hi @tr_newman, why don't you use two different alerts, one for each system with its own field names? Ciao. Giuseppe
Hi Team,
I am reaching out to seek your valuable inputs regarding setting up restrictions on app-level logs under a particular index in Splunk. The use case is as follows: we have multiple application logs that fall under a single index. However, we would like to set up restrictions for a specific app name within that index. While we are aware of setting up restrictions at the index level, we are wondering if there is a way to further restrict access to logs at the app level. Our goal is to ensure that only authorized users have access to the logs of the specific app within the designated index.
Thank you in advance for your assistance and expertise. We look forward to your valuable inputs.
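One approach sometimes used for this (sketched below with placeholder names, so treat it as an assumption to verify, not a tested configuration) is a per-role search filter in authorize.conf: index-level access remains the only hard boundary, but a srchFilter narrows what searches return for that role:

```ini
# Hypothetical role: may search the shared index, but only sees one app's events
[role_app_a_users]
srchIndexesAllowed = shared_app_index
srchFilter = source="/var/log/app_a/*"
```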
Hi @Zane,
you could put both folders under monitoring. If you don't use the crcSalt = <SOURCE> option, Splunk reads only the last events in the rotated file and doesn't index the events twice if they have a different filename (remember that the above option must not be present!). Otherwise, if you rename the file before rotating (adding e.g. the date to the file name), you can delay the rotation (30/60 seconds are sufficient) so Splunk will also read the last events in the file before it's moved to the new folder.
Ciao.
Giuseppe
Hi @Navanitha, try installing it on the HF as well. Ciao. Giuseppe
Hi @AL3Z,
if you need a script outside Splunk to check for the presence of a Splunk installation, you have to check for the splunk folder (for Splunk servers) or the splunkforwarder folder (for Universal Forwarders). If you want to check Splunk activity, you have to search for the splunkd process, which is present on both Windows and Linux. On Linux it's:

ps -ef | grep splunkd

On Windows there's the tasklist command, but I'm not a Windows specialist.
Ciao.
Giuseppe
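The folder check above can also be scripted portably; here is a minimal Python sketch using the default install paths (adjust the candidate paths if Splunk was installed elsewhere):

```python
import os

def splunk_install_dirs(candidates=("/opt/splunk",
                                    "/opt/splunkforwarder",
                                    r"C:\Program Files\Splunk",
                                    r"C:\Program Files\SplunkUniversalForwarder")):
    """Return the default Splunk install paths that exist on this host."""
    return [p for p in candidates if os.path.isdir(p)]

print(splunk_install_dirs())
```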
Hi @nithys,
it's the same thing, with only one difference: you have to insert:
* parentheses in Token prefix and Token suffix,
* the field name, an equals sign and a quote in Token value prefix: field="
* a quote in Token value suffix: "
* OR as the delimiter; remember to add a space before and after the OR.
Ciao.
Giuseppe
P.S.: Karma Points are appreciated
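In Simple XML source, those settings correspond to the multiselect elements below (the field name and token are placeholders):

```xml
<input type="multiselect" token="tokField">
  <label>My Field</label>
  <!-- parentheses around the whole expression -->
  <prefix>(</prefix>
  <suffix>)</suffix>
  <!-- field name, equals sign and opening quote before each value -->
  <valuePrefix>field="</valuePrefix>
  <!-- closing quote after each value -->
  <valueSuffix>"</valueSuffix>
  <!-- OR between values, with a space before and after -->
  <delimiter> OR </delimiter>
</input>
```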
Hi Splunk Gurus,
looking for some help please. I am trying to extract the timestamp from JSON sent via a HEC token. My inputs.conf and props.conf are in the same app and are deployed on the heavy forwarders.

My props:

[hec:azure:nonprod:json]
MAX_TIMESTAMP_LOOKAHEAD = 512
TIME_PREFIX = createdDateTime\"\:\s+\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z
TZ = UTC

Sample event:

{"@odata.type": "#microsoft.graph.group", "id": "XXXXXXXXXXXXX", "deletedDateTime": null, "classification": null, "createdDateTime": "2022-06-03T02:05:02Z", "creationOptions": [], "description": null, "displayName": "global_admin", "expirationDateTime": null, "groupTypes": [], "isAssignableToRole": true, "mail": null, "mailEnabled": false, "mailNickname": "XXXX", "membershipRule": null, "membershipRuleProcessingState": null, "onPremisesDomainName": null, "onPremisesLastSyncDateTime": null, "onPremisesNetBiosName": null, "onPremisesSamAccountName": null, "onPremisesSecurityIdentifier": null, "onPremisesSyncEnabled": null, "preferredDataLocation": null, "preferredLanguage": null, "proxyAddresses": [], "renewedDateTime": "2022-06-03T02:05:02Z", "resourceBehaviorOptions": [], "resourceProvisioningOptions": [], "securityEnabled": true, "securityIdentifier": "XXXXXXXXXXXXX", "theme": null, "visibility": "Private", "onPremisesProvisioningErrors": [], "serviceProvisioningErrors": [], "graphendpointtype": "directoryroles"}

I want to extract the timestamp from the createdDateTime field. I tried TIMESTAMP_FIELDS = createdDateTime, and INGEST_EVAL = _time=strptime(spath(_raw,"createdDateTime"), "%Y-%m-%dT%H:%M:%S%Z") as per previous answers.splunk posts, but nothing worked; Splunk still picks up index time only. What am I doing wrong here?
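As a side check, the sample createdDateTime value does parse with an equivalent strptime pattern in plain Python (using %z for the trailing Z, which Python 3.7+ accepts where Splunk's parser uses %Z), so the format string itself looks plausible:

```python
from datetime import datetime, timezone

created = "2022-06-03T02:05:02Z"  # value from the sample event above
ts = datetime.strptime(created, "%Y-%m-%dT%H:%M:%S%z")
print(ts.isoformat())
```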
I have set this method, but it's still not working.

Let me explain my situation below. I need to monitor the folders obtained by mounting Azure's "file share" (like pvc-xxxx). As I mentioned before, the log generation policy creates a new folder named with today's date:

/mnt/xxx/2023-09-20
/mnt/xxx/2023-09-21
...

and the log naming policy is:

/mnt/xxx/2023-09-20/open-development-abcd.log
/mnt/xxx/2023-09-20/open-development-efgh.log
/mnt/xxx/2023-09-21/open-development-abcd.log
/mnt/xxx/2023-09-21/open-development-efgh.log

The log names are the same, but the content is different. It always stalls ingesting data the next day, and I need to restart it before the data is collected. Today, 2023-09-21, I tried placing some test files in the 2023-09-21 folder manually before restarting; it looks like the UF was unable to detect them, so I finally restarted it and the data was collected.

My input is below:

[monitor:///mnt/xxx/*/open-development*.log]
disabled = 0
host = xxxx
index = xxx
sourcetype = xxx
_TCP_ROUTING = xxx
crcSalt = <SOURCE>

Please help locate the root cause. Thanks so much.
Hi @niketn
How do I perform an auto refresh once a domain/data entity is selected, so the result populates? Next, when I try to click the second option from the dropdown, the older result still remains. Query used:

<input type="dropdown" token="tokSystem" searchWhenChanged="true">
  <label>Domain Entity</label>
  <fieldForLabel>$tokEnvironment$</fieldForLabel>
  <fieldForValue>$tokEnvironment$</fieldForValue>
  <search>
    <query>| makeresults
| eval goodsdevelopment="a",materialdomain="b,c",costsummary="d"</query>
  </search>
  <change>
    <condition match="$label$==&quot;a&quot;">
      <set token="inputToken">test</set>
      <set token="outputToken">test1</set>
    </condition>
    <condition match="$label$==&quot;c&quot;">
      <set token="inputToken">dev</set>
      <set token="outputToken">dev1</set>
    </condition>
    <condition match="$label$==&quot;m&quot;">
      <set token="inputToken">qa</set>
      <set token="outputToken">qa1</set>
    </condition>
  </change>
</input>

------

<row>
  <panel>
    <html id="messagecount">
      <style>
        #user{
          text-align:center;
          color:#BFFF00;
        }
      </style>
      <h2 id="user">INBOUND </h2>
    </html>
  </panel>
</row>
<row>
  <panel>
    <table>
      <search>
        <query>index=$indexToken1$ source IN ("/*-*-*-$inputToken$")  | timechart count by ObjectType```| stats count by ObjectType</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>

------

<row>
  <panel>
    </style>
    <h2 id="user">outBOUND </h2>
    </html>
    <chart>
      <search>
        <query>index=$indexToken$ source IN ("/*e-f-$outputToken$-*-","*g-$outputToken$-h","i-$outputToken$-j") </query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
Hi @gcusello
1. Could you help me change the current data entity dropdown behaviour to a multiselect option, with the above query (i.e. the second dropdown)?
2. Also, how do I auto-clear the existing search result panel when I next select another option from the domain entity dropdown?
Hello,
We're using Splunk Enterprise version 9.1.0.2 and trying to configure Splunk to send email alerts, but cannot make it work. We've tried both Gmail and O365; here are the errors:

1. Email settings: Mail host: smtp.gmail.com:587, Enable TLS, enter username and password (we use an app password for smtp.gmail.com)
--> Error: sendemail:561 - (530, b'5.7.0 Authentication Required. Learn more at\n5.7.0 https://support.google.com/mail/?p=WantAuthError 5-20020a17090a1a4500b00274e610dbdasm2199058pjl.8 - gsmtp', 'sender@gmail.com') while sending mail to: receive@....

2. Email settings: Mail host: smtp.office365.com:587, Enable TLS, enter username and password (the username and password can log in to Outlook successfully)
--> Error: sendemail:561 - (530, b'5.7.57 Client not authenticated to send mail. [SGAP274CA0001.SGPP274.PROD.OUTLOOK.COM 2023-09-21T02:01:45.399Z 08DBB9CB1E03821B]', 'sender@senderdomain.com') while sending mail to: receive@....

Please support. Thank you.
I have downloaded my logs, which are encoded as "GBK" or "GB2312", from my server to my desktop, and I am getting the log data into the platform by using the Add Data function in the Splunk web UI. The version of my Splunk Enterprise is 8.2.1.

When I set the sourcetype in the process of adding the data, I choose the charset gb2312, and when I then search the log in Splunk, it is displayed correctly.

However, when I use the inputs and outputs conf in the Splunk Universal Forwarder to send the logs to the Splunk platform, the logs are displayed but have some garbled characters.

I have followed the suggestion in Solved: Can splunk support GBK? - Splunk Community but it did not solve my problem. Below is my configuration on my server and UF.

server: /opt/splunk/etc/system/local/props.conf

[mysourcetype]
CHARSET = GB2312

UF: /opt/splunk/etc/apps/search/local/inputs.conf

[default]
host = 10.31.xx.xx

[monitor:///home/abc/log/20230921/*.log]
index = abcd
sourcetype = mysourcetype
disabled = false
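The garbling symptom is consistent with a charset mismatch somewhere along the path; the difference is easy to reproduce in plain Python (the sample string below is just an illustration):

```python
# GB2312-encoded bytes decode cleanly with the right charset...
raw = "中文日志".encode("gb2312")
print(raw.decode("gb2312"))

# ...but decoding the same bytes as UTF-8 yields replacement characters (mojibake)
print(raw.decode("utf-8", errors="replace"))
```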
I am currently encountering a problem where a log file is archived to another folder after reaching certain conditions. I have set up UF monitoring for both files, but the collected data may be duplicated. However, if I do not monitor the archive folder, some logs near the end of the file are lost. I suspect it may be related to the file being archived too quickly. How can I solve this problem?

For example, my log file is abc.log, and it is then archived to /debug/abc.1.log under the current path. I have set up monitors for both files, but the data is duplicated. However, if I do not monitor /debug/abc.1.log, I lose the content at the end of the file.
We are continuing to observe this issue in version 8.2.5. Did Splunk ever fix this in later versions? We started observing this issue after moving to SmartStore and had not observed it prior. Restarting the Cluster Manager fixes the issue, but it happens again at a later time. We also do not collect metrics in our Splunk environment, yet I can see a _metrics index on the indexer cluster for some reason.
@DexterMarkley, could you provide the location of the file that needs to be changed?
When attempting to connect to AWS from within the AWS app, I am receiving:

'SSL validation failed for https://sts.us-east-1.amazonaws.com/ [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local certificate (_ssl.c:1106)'

I received a cert.pem file from AWS, but I don't know where to put this file in Splunk.

Splunk version: 8.2.8
Splunk Add-on for Amazon Web Services (AWS) version: 7.0.0

Could you guide me on where to apply the cert.pem file? Thank you.
When did this start?  What changed around that time?  Is your license still valid?  What method are you using to see your license use?  Does your license use the Ingest model or the Workload model?