All Topics

I have the following data:

......
2021-06-18 21:05:45.037 +02:00 [Information] Status=[1111111111111111111111111111111111111111111111111111111111111111]"
2021-06-18 21:05:45.037 +02:00 [Information] Status=[1111111111111111111111111111111111111111111111111111111111111111]"
2021-06-18 21:05:45.037 +02:00 [Information] Status=[1111111111111111111111111111111111111111111111111111111111111111]"
.....

I need to extract the Status field from the log data; each character of the value belongs to one Item. Example: the first 1 is status OK for Item_1, the second 1 is status OK for Item_2, etc. (Total Items = 64.)

Finally, I want to take the raw data, convert it to a Status, and link it to an Item:

_time                    Items   Status
2021-06-18 21:05:45.037  Item_1  Ok
2021-06-18 21:05:46.037  Item_1  Not Ok
2021-06-18 21:05:47.037  Item_2  Ok
2021-06-18 21:05:49.037  Item_n  ....

I have tried the following search without success:

source="C:\\Logs\\*" host="392" index="Items"
| eval length=len(Status)
| where length > 60
| eval Item_1=substr(Status, 2, 1), Item_2=substr(Status, 3, 1), Item_3=substr(Status, 4, 1), Item_4=substr(Status, 5, 1)
| eval fieldnames = ""
| foreach Item_* [ eval "<<FIELD>>" = case(<<FIELD>> == 0, "Not Ok", <<FIELD>> == 1, "Ok", <<FIELD>> == 2, "Not Known") | eval test=<<FIELD>> ]
| foreach Dump_Chute* [ eval fieldnames = fieldnames."<<FIELD>>;" ]
| eval fieldnames = split(fieldnames, ";")
| table fieldnames test _time
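A possible starting point, sketched under assumptions (the rex pattern is inferred from the sample events above, and the 0/1/other-to-status mapping mirrors the case() in the attempted search): expand each event into one row per character position, then map the character to a status.

source="C:\\Logs\\*" host="392" index="Items"
| rex "Status=\[(?<status_bits>[01]+)\]"
| eval pos=mvrange(1, len(status_bits)+1)
| mvexpand pos
| eval pos=tonumber(pos)
| eval Items="Item_".pos
| eval Status=case(substr(status_bits, pos, 1)=="1", "Ok", substr(status_bits, pos, 1)=="0", "Not Ok", true(), "Not Known")
| table _time Items Status

With a 64-character Status value this yields Item_1 through Item_64 for each event.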
Hi community,

I'm using Splunk 8.0.6 and I've installed the TA on a search head cluster. I've configured a Jira Issue Alert action, but when the alert is triggered no ticket is created in Jira and I see this error in jira_issue_alert_modalert.log:

2021-06-24 16:24:10,665 ERROR pid=20944 tid=MainThread file=cim_actions.py:message:425 | sendmodaction - worker="splkcisosh1" signature="Unexpected error: HTTPSConnectionPool(host='jiraurl', port=443): Max retries exceeded with url: /rest/api/2/serverInfo (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f384b80dd10>: Failed to establish a new connection: [Errno 111] Connection refused'))." action_name="jira_issue_alert" search_name="CRM_Stargate_ID2 Clone" sid="scheduler__benedettim__search__RMD59d66eff7707ced3c_at_1624544640_299_522794B8-61DC-4346-ACD8-CA61702DA811" rid="0" app="search" user="benedettim" digest_mode="1" action_mode="saved" action_status="failure"

The proxy is configured, though, since in ta_atlassian_jira_issue_alerts_settings.conf I see this:

[proxy]
proxy_enabled = 1
proxy_password = ********
proxy_port = 8080
proxy_rdns = 1
proxy_type = http
proxy_url = proxy
proxy_username = username

I've tried to open a ticket using Postman with the same user and it works fine.

Do you have any idea?
Thanks
Marta
Currently trying to work out a search that would allow me to generate a notable event if a user has made successful connections to Okta from different IPs within the same timespan. I tried searching the boards but couldn't come up with something that matched my scenario.

TL;DR: How do I create the notable for when the same user has 2 or more sessions from different IPs?

Example:
UserA connects to Okta and creates a session for 3 hours (12pm - 3pm) from 1.1.1.1.
UserA connects to Okta and creates a session for 6 hours (2pm - 8pm) from 2.2.2.2.

I've tried concurrency, and have also tried using transaction based on sessions, but can't seem to tie it together. I'm also just starting out with Splunk, so I'm still learning it all. Any help would be appreciated.

index=okta eventtype=okta_log_authentication
| rename authenticationContext.externalSessionId as session
| transaction session startswith="user.session.start" endswith="user.session.end"
| streamstats count(session) AS TotalSession by src_ip, user
| search TotalSession>=2
| table _time src_ip user duration TotalSession
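One way to sketch this, reusing the field names from the search above: count distinct source IPs per user over the search window. Note this is only an approximation of the requirement, since it flags any user seen from two or more IPs in the window rather than strictly checking that the sessions overlap in time.

index=okta eventtype=okta_log_authentication
| rename authenticationContext.externalSessionId as session
| stats dc(src_ip) AS distinct_ips values(src_ip) AS ips earliest(_time) AS first_seen latest(_time) AS last_seen by user
| where distinct_ips >= 2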
Need help with an SPL query to get a complete list of all hosts (name, IP, OS), routers, switches, and other devices in Splunk Enterprise, including their status (up or down). I tried the Monitoring Console but it will not list IPs. Thank you in advance.
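As a rough sketch, host names and a last-seen status can be pulled with tstats; the one-hour "up" threshold is an arbitrary assumption, and IP/OS details would still need to come from the events themselves or an asset lookup.

| tstats latest(_time) AS last_seen WHERE index=* BY host
| eval status=if(now() - last_seen < 3600, "up", "down")
| convert ctime(last_seen)
| table host status last_seen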
Hi,

Is there a way to change the width of column charts? I have 4 trellis column charts that are currently displayed vertically. What I'd like to do is display two of them side by side.

What I have: (screenshot)

What I would like to have: (screenshot)

Thank you very much in advance.
Good day all. We are trying to use the "Microsoft 365 Defender Add-on for Splunk" in a Splunk Gov Cloud instance. I have been asked to contact the developer but have been unable to find a way to reach them. I was hoping someone would have some suggestions as to how to contact them. The developer is listed as "Splunk Works", and if you click on that it takes you to "Splunklabs". Any suggestions will be greatly appreciated. ~Lee
Hi all,

Is there a way to configure the Introspection Partition monitor to check other filesystems rather than just the hot/warm and cold filesystems?

We have a separate filesystem for our frozen buckets, with scripts to delete buckets over a certain number of days old. Mostly this works well in our cluster but, occasionally, the filesystem fills up and prevents the cold buckets from being rolled over, which then causes other problems.

I'd like to be able to monitor the frozen filesystem to try to predict or prevent a full filesystem before it happens. I would rather use the Introspection app if possible.

Cheers,
Mark.
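If the Partition monitor itself can't be extended, one workaround is to alert on the introspection data directly. A minimal sketch, assuming the frozen volume is mounted under a path like /frozen (the mount point and the 80% threshold are placeholders, and the data.* field names should be checked against your own _introspection events):

index=_introspection sourcetype=splunk_disk_objects component=Partitions "data.mount_point"="/frozen*"
| eval pct_used=round((('data.capacity' - 'data.available') / 'data.capacity') * 100, 1)
| stats latest(pct_used) AS pct_used BY host, "data.mount_point"
| where pct_used > 80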
Hi,

I am trying to configure the TA-QualysCloudPlatform add-on, but I am getting an error when I try to configure it. I have installed it on my HF. When I enter the Qualys API Server URL, enter the username and password, and then hit save, it gives me the following error:

ERROR: In handler 'passwords': Does not exist: /nobody/TA-QualysCloudPlatform/passwords/credential::splunk_ta_o365_syncreon_0365_client_secret:

I am entering the correct username and password, but even if I enter any password it still gives the error. From reading other articles on this add-on, there have been issues in the past with the add-on not overwriting older saved credentials, but that does not seem to be the issue here, as the credentials are not even being saved (there is no passwords.conf being written to the /local folder).

Also, there are no logs written to the /var/log/splunk/ta-QualysCloudPlatform.log log file. The file is empty, so I can't troubleshoot any errors.

I am running the latest version of the add-on, 1.8.8, and have even tried using older versions (as far back as 1.8.3), but the problem still occurs.

Does anyone know why this is happening? Thanks!
I have run a scheduled search in Splunk and checked the status of the job under Splunk Enterprise -> Activity -> Jobs. There I can see the job has run and the status is "Done".

But when I search for the run history of the same job under the "_internal" index and the sourcetype "scheduler", I don't have any logs for the run of that scheduled search. The job shows as completed with status "Done" under Activity, but in the scheduler logs I have no record that such a job ran at that time.

Does this indicate an issue, or am I missing something in the way I'm checking the logs? Could someone please help me with this as soon as possible?
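For reference, a minimal sketch of the kind of scheduler-log search being described (the savedsearch_name value is a placeholder for the actual saved search name):

index=_internal sourcetype=scheduler savedsearch_name="My Scheduled Search"
| table _time savedsearch_name app user status run_time scheduled_time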
Data missing in chart dashboard view

Hi,

I created a chart view with multiple values to show how many users are connected per month through the current year. My problem is that when I select a one-year period, some data is missing: the value per month is not the same as when I select only that month.

This is my command for the chart:

| eval mois=strftime(_time, "%m")
| dedup user
| chart count(eval(source_user="external" AND user_ecole="false")) AS external count(eval(source_user="internal")) AS internal count(eval(source_user="external" AND user_ecole="true")) AS external_ecole by mois
| rename mois AS "Mois"

I'm attaching screenshots to make it easier to understand. With the period set to 1 year, the January month (01) shows internal=6 and external=1; these results are wrong. If I select only the January month, the result is right: internal=42 and external=41.

Is there a limitation on the amount of data to analyse? Why is the result not complete when I select a time range of 1 year?

Thank you for your help.

Regards,
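One possible explanation, offered as an assumption based on the search above: dedup user over a one-year range keeps only each user's first event in the whole year, not one per month. A sketch that dedups per user per month instead:

| eval mois=strftime(_time, "%m")
| dedup user mois
| chart count(eval(source_user="external" AND user_ecole="false")) AS external count(eval(source_user="internal")) AS internal count(eval(source_user="external" AND user_ecole="true")) AS external_ecole by mois
| rename mois AS "Mois"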
I have developed the setup page of my TA using a setup view. On the Splunk instance, I have installed my TA along with other TAs. I am fetching stored credentials from my TA's passwords.conf file using the Splunk JS SDK's storagePasswords, but this also returns stored credentials from the passwords.conf of the other TAs installed on the Splunk instance. How can I get the stored credentials of only my TA's passwords.conf file using the Splunk JS SDK's storagePasswords?
Hi Team,

I am looking to get two different field values in a single query in Splunk. For example, I have two different country codes and would like to get both of their values:

Request.countrycode=EU
Request.country=SW

The "OR" command is not working for this. Any suggestions, team?
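For reference, a minimal sketch of the usual OR syntax (index=myindex is a placeholder, and the field names just mirror the post; whether this matches depends on the events actually containing those fields and values):

index=myindex (Request.countrycode=EU OR Request.country=SW)
| table _time Request.countrycode Request.country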
Hi guys,

I'd like to exclude the logs with the same time from the search results.

_time                   _raw
2021-5-26 00:00:00      A
2021-5-26 00:00:00      B
2021-5-26 00:05:00      C
2021-5-26 00:10:05      D

What I'd like to see:

_time                   downtime
2021-5-26 00:05:00      C
2021-5-26 00:10:05      D

Would you please share how I can do this? Thanks!
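A minimal sketch of one approach, assuming "exclude" means dropping every event whose timestamp appears more than once, and that downtime is just _raw renamed (index=myindex stands in for the base search):

index=myindex
| eventstats count AS events_at_time BY _time
| where events_at_time = 1
| rename _raw AS downtime
| table _time downtime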
Hi,

I've started using HEC to push data to my Splunk Enterprise instance and noticed the errors I get. For example, sending this:

{"aa": "hello world"}

results in:

{
  "text": "No data",
  "code": 5
}

However, when sending events in batches, I will only get this error if the first event I send is problematic:

{"event": "hello world"}
{"aa": "hello world"}

results in:

{
  "text": "Success",
  "code": 0
}

Because I need to know that all my events were sent successfully (and acks are not an option, considering I send data to Splunk Cloud as well), is there anything I can do (other than sending each event by itself)?
Hello,

I'm currently encountering an issue where I'm unable to split two multivalue fields concurrently.

My table:

Product | color  | description1                                         | description2
phone   | blue   | location: loc_a, date: date_a, serial_no: serial_a  | location: loc_a, date: date_q
phone   | yellow | location: loc_d, date: date_e, serial_no: serial_f  | location: loc_x, date: date_y, serial_no: serial_z

Desired output:

Product | color  | description1         | description2
phone   | blue   | location: loc_a      | location: loc_a
phone   | blue   | date: date_a         | date: date_q
phone   | blue   | serial_no: serial_a  |
phone   | yellow | location: loc_d      | location: loc_x
phone   | yellow | date: date_e         | date: date_y
phone   | yellow | serial_no: serial_f  | serial_no: serial_z

I was able to find the mvexpand function, but sadly it cannot split two fields at once.

Regards,
Yu Ming
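A minimal sketch of the usual mvzip/mvexpand pattern for expanding two multivalue fields in lockstep, assuming description1 and description2 are already multivalue fields (the "|||" delimiter is an arbitrary choice):

| eval zipped=mvzip(description1, description2, "|||")
| mvexpand zipped
| eval description1=mvindex(split(zipped, "|||"), 0), description2=mvindex(split(zipped, "|||"), 1)
| fields - zipped
| table Product color description1 description2

Note that mvzip pairs values positionally and stops at the shorter list, so the row where description1 has a serial_no but description2 does not may require padding the shorter field first.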
Hi,

I have configured the standalone Windows service on the AppDynamics .NET agent and tried to discover the transactions using Live Preview (Instrument -> Transaction detection), but nothing was returned while the application was running. I also tried with uninstrumented code and with "classes with a name that contains", but it has not returned anything. Is there a way to discover the entry points and instrument them?
I am trying to check the Splunk forwarder on AWS ARM machines, and when I try to start the Splunk service I get this error:

-bash: ./splunk: /lib/ld-linux.so.3: bad ELF interpreter: No such file or directory

I am running this command:

./splunk start --accept-license

I am using Splunk 8.0.3 ARMv6 on an ARM machine. Please let me know why this is happening.
Hi,

I have an alert that triggers an action to run a script. While the body of the script is working, the passing of arguments from Splunk to my script is not working. Please note that I use a bash script, and whenever I reference argument $8 its value is empty.
Hi Experts,

I need suggestions on how we can send email alerts for specific error codes for a Business Transaction. The BT and error detection are working as expected; the question is only about the alerting part.

Consider BT1, which has errors reported as 400, 500, 300, XXX. The requirement is to get an email alert when there is more than one 500 error for BT1.

Any pointers will be helpful.

Regards,
Sahil
Hi,

I am currently facing an issue ingesting logs from SolarWinds using the "SolarWinds Add-on for Splunk". From the internal log I am getting the error "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verification failed". Any support and assistance to help rectify this issue would be appreciated.