All Posts


Well I guess it is a bug, then.  There are quite a few bugs.
@abhi04 Hello Abhi, to onboard the logs you have to use Splunk add-ons, not apps. Add-ons handle tasks related to data ingestion, parsing, field extraction, etc. Splunk-certified or custom-written TAs (technology add-ons) adhere to the Common Information Model (CIM) and are often used for data parsing. An app in Splunk provides a front-end interface for visualizing data; it's like a user-friendly dashboard that lets you explore and analyze information. If this reply helps you, Karma would be appreciated!
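To make the add-on vs. app split concrete, here is a minimal sketch of the kind of parsing configuration an add-on typically ships. The sourcetype name and field names below are invented for illustration, not taken from any real TA.

```ini
# props.conf in a hypothetical add-on (TA); sourcetype and fields are examples only
[example:vendor:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# CIM-style field alias so searches and data models see a normalized field name
FIELDALIAS-src = source_address AS src
```

An app, by contrast, would mostly contain dashboards and saved searches that consume these normalized fields.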
Hi Team, how can I ingest Genesys Cloud logs into Splunk? I see two apps: 1. Genesys Pulse Add-on for Splunk https://splunkbase.splunk.com/app/5255 2. Genesys Cloud Operational Analytics App https://splunkbase.splunk.com/app/6552 For the Genesys Pulse Add-on for Splunk, I was able to see that we need to set up the configuration, but for the Genesys Cloud Operational Analytics App I don't follow the setup configuration. Which app should be used for Genesys Cloud log ingestion into Splunk Cloud?
Hi @abroun, this is probably one of the few cases where join could be the best solution:

some-search
| join type=left id
    [ search some-search-index $id$
    | eval epoch = _time
    | where epoch < $timestamp$
    | sort BY _time
    | head 1
    | fields id status type ]
| table id time status type

Ciao. Giuseppe
Hi fellow Splunkers, I recently came across an authentication token created by splunk-system-user, and I had no clue where it came from; my Splunk admin colleagues didn't create the token either. Is it a feature/normal behavior that Splunk will generate a token every single time you click "view on mobile" in the menu of an XML dashboard? Can we turn it off? We don't want users to be able to freely create an infinite number of authentication tokens, because it would make keeping an overview of tokens much harder, and we haven't configured the Secure Gateway.
I have smart card authentication enabled on my on-prem Enterprise system. I'm using the built-in capability that Splunk has now, not Apache. It had been working great, but when I upgraded my system from 9.0.3 to 9.2.1, I get an Unauthorized error when trying to log on. I changed requireClientCert to false so I could log on with username and password. I checked all my LDAP settings and everything looks the same. I even added another DNS entry to see if that would change anything, but no luck; still getting the unauthorized error.
Thanks @danspav for your response. First of all, I didn't mention that I'm using Splunk Enterprise 9.0.6, if that makes a difference. The provided XML code is similar to the one originally posted, except it removes the <set> element. I tried it, and when a button was clicked it added the following "form.link_dash" parameter to the main dashboard's URL: /app/search/dash_main?form.link_dash=dash_a With this modified URL now in the URL bar, if the browser reload button is pressed, it opens a new tab to dash_a after loading and rendering the main dashboard, as if the button had been clicked. It is like prefilling the button value from the URL parameter.
I'm trying to install “Cisco Networks App for Splunk Enterprise” and “Cisco Networks Add-on for Splunk Enterprise” in Splunk Cloud Version: 9.1.2308.203 Build: d153a0fad666, but it is not possible. When I search for them they do not appear, and if I try to upload them I receive a message informing me: "This app is available for installation directly from Splunkbase. To install this app, use the App Browser page in Splunk Web." But that page is nowhere to be found.
It looks like "epoch_password_last_modified" is a multivalue field. Assuming you want to continue processing this as a set of multivalue fields (although I think you might be better off expanding to individual events, or not creating the multivalue fields in the first place), you could try something like this: | eval time_difference=mvmap(epoch_password_last_modified, epoch_current_time - epoch_password_last_modified)
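The reply also mentions expanding to individual events instead of keeping multivalue fields; a minimal sketch of that alternative, reusing the field names from the thread:

```spl
... | mvexpand epoch_password_last_modified
| eval time_difference = epoch_current_time - epoch_password_last_modified
```

After mvexpand, each event carries a single epoch_password_last_modified value, so a plain eval subtraction works without mvmap.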
I've found the solution; the problem was mine. If I put "testcsv.csv", it doesn't work. But if I remove the ".csv", it works perfectly. Thanks for your reply.
@siemless This may be best to discuss in Slack, but I could not find you there, so I will respond here. In some cases, I have boxes where the /opt/splunk dir is mounted on a separate drive with mount point /opt/splunk. In that case you can just swap the disk; I learned this method from AWS support. But that takes preplanning. In other cases, I have boxes that are jacked up, either with volume issues across multiple disks or just not set up to swap disks. In that case you can use the Splunk docs >>> https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/MigrateaSplunkinstance I argued with Splunk about the documentation steps, but they claim the steps are correct, although I still find them confusing. FWIW, this is what I did:
1 > Create a new host with a new OS (in my case I rename/re-IP to match the original afterward).
2 > Install the same version of Splunk on the new host (I used a .tar), set up systemd, set the same admin password, then stop splunkd; maybe test a restart and a reboot to verify.
3 > Stop splunkd on the old host, tar up /opt/splunk, copy the old .tar to the new box, untar it over the new install, then start splunkd.
That worked for me, and going forward all new hosts will be configured for the disk-swappable process. Good luck
Hello, unfortunately I've used your exact method and it doesn't work. I do have my line indicating my "url", but nothing in "type" nor in its "count". Maybe I made a mistake by indicating the wrong "destination app" when creating the "lookup definition"? What should I put? Thanks, Regards
Hello everyone, I'm trying to calculate the "time_difference" between one column and another in Splunk. The problem is that the value I subtract from is the current time, and when I use the current-time value it is shown in the table as one event (epoch_current_time). Therefore, when I subtract the "epoch_password_last_modified" value from "epoch_current_time", I get no results. Is there a way to make "epoch_current_time" visible in each row, like the "epoch_password_last_modified" value?
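A common pattern for this kind of per-row comparison (a sketch only, not tested against the poster's data; field names are taken from the question) is to compute the current time on every row with eval, or to spread a single event's value across all rows with eventstats:

```spl
| eval epoch_current_time = now()
```

or, if epoch_current_time exists on only one event in the result set:

```spl
| eventstats max(epoch_current_time) AS epoch_current_time
```

Either way, every row then carries its own epoch_current_time, so the subtraction produces a value per row.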
Try something like this | eval check=replace($token$," +","\s+") | where match(ID, check)
Try tostring with the "duration" option:
| eventstats min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=tostring(ElapsedTimeInSecs,"duration")
If you didn't want multiple lines, this is how I would accomplish the same thing: | sort -elapseJobTime | stats list(eval(mvindex(JOBNAME,0,2))) as JOBNAME list(eval(mvindex(JOBID,0,2))) as JOBID list(eval(mvindex(elapseJobTime,0,2))) as elapseJobTime
Response received from support: This issue is currently being raised with our internal teams to look into changing the behavior of these checks in a future release of the app, to prevent this error from appearing in the logs. Looking at the options available in Splunk regarding the application, there does not seem to be a way to limit the polling frequency, which in turn would reduce the number of errors appearing.
Try filtering the results on the date_hour field. index=* host=* | where date_hour>=18 AND date_hour<21 | eval pctCPU=if(CPU="all",100-pctIdle,Value) | timechart avg(pctCPU) AS avgCPU BY host
Assuming traceid and message have already been extracted from the JSON, try something like this | rex field=message "Error_Request_Response for URI: (?<uri>[^,]+), and Exception Occurred: (?<exception>[^,]+)," | table traceId uri exception
I have a dataset of user data including the user's LastLogin. The LastLogin field is slightly oddly formatted but very regular in its pattern. I wish to calculate the number of days since LastLogin. This should be super simple. What is bizarre is that in a contrived example using makeresults it works perfectly.

| makeresults
| eval LastLogin="Mar 20, 2024, 16:40"
| eval lastactive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval dayslastactive=round((now() - lastactive) / 86400, 0)

This yields the expected result. But with the actual data the same transformations do not work.

| inputlookup MSOLUsers
| where match(onPremisesDistinguishedName, "OU=Users")
| where not isnull(LastLogin)
| eval LastActive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval DaysLastActive=round((now() - LastActive) / 86400, 0)
| fields Company, Department, DisplayName, LastLogin, LastActive, DaysLastActive

This yields empty LastActive and DaysLastActive values. What am I missing? Cutting and pasting the strings into the makeresults form gives what I would expect.
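One frequent cause of this symptom (strptime returning null on lookup data that parses fine when retyped) is invisible characters, such as non-breaking spaces, in the stored strings. A hedged diagnostic sketch, reusing the field names above; whether hidden characters are the actual culprit here is an assumption:

```spl
| inputlookup MSOLUsers
| where not isnull(LastLogin)
| eval CleanLogin=replace(LastLogin, "[^ -~]", " ")
| eval LastActive=strptime(CleanLogin, "%b %d, %Y, %H:%M")
```

The replace call turns every character outside the printable-ASCII range into a plain space before parsing; if LastActive is populated after this, the original strings contained characters the strptime format could not match.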