All Topics


Why do the learning portal pages never load? The product is already very slow to use, but I didn't expect the learning pages to have the same issues. It looks like this is just characteristic of Splunk.
I want to rename a row value by data case (it is a line chart). The line chart row name should change based on the token $value$: if the value is "iron", the row must be renamed "metal" and the graph line should become black; if the value is "steak", the row must be renamed "food" and the graph line should become red. So I wrote the code like this, but it doesn't work at all:

<search>
  <query>
    ... | eval dt = case("$value$" == "iron", "metal", 1=1, "food") | rename "row 1" as dt ...
  </query>
</search>
<option name="charting.fieldColors">{"metal": 0xffffff, "food": 0xFF0000}</option>

How can I solve this problem?
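A minimal sketch of one common fix, assuming the chart's series can be computed before charting: rename takes a literal field name, not the value of another field, so compute the label with eval and chart by it instead (the field name series_label is illustrative, and black would be 0x000000 rather than 0xffffff):

... | eval series_label = case("$value$" == "iron", "metal",
                               "$value$" == "steak", "food",
                               1=1, "other")
    | timechart count by series_label

<option name="charting.fieldColors">{"metal": 0x000000, "food": 0xFF0000}</option>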
I have a set of data that I upload into Splunk every morning as a .csv file because the tool doesn't feed the particular data automatically. It is a list of agents installed on assets. I use a savedsearch to query the data because I latest() and stats every field to make sure it is the latest record in the database; it's a pretty big query.

I am interested in forcing the data to look like it was just ingested every time it is queried (when the savedsearch is executed). I tried adding a field to the savedsearch called _time (also tried timestamp) and setting it to now(). That worked for the display, but my records are still going stale, so my assumption is that Splunk is using the original timestamp for these records (and I am ASSUMING I cannot change that). While the query works for the 24-hour timeframe, or if I set the timeframe to the current day, all is good (until the day passes). But if I shorten the timeframe to last 15 minutes, last 4 hours, or last 60 minutes, I get nothing from the query, which makes sense because that data was timestamped outside the range. But I need the timestamp to look like right now.

The data is timestamped when I upload it, using the current date/time. I want the last upload to ALWAYS be the current set of records (including the time). It should work with any timeframe. In a perfect world, the timestamp would be the moment the query is executed, so it is always now(), making the data 'look' like it is fresh. Is there a way to do this?
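A hedged sketch of one workaround: the time range picker filters on the indexed timestamp before any eval runs, but time modifiers written into the search string override the picker. So pin the search window inside the savedsearch, keep the latest record per asset, and then overwrite _time for display (index, sourcetype, and asset_id are placeholder names):

index=main sourcetype=agent_inventory earliest=-7d latest=now
| stats latest(*) as * by asset_id
| eval _time = now()

Because earliest/latest are hard-coded in the query, shorter picker ranges no longer exclude the uploaded records, and the final eval makes every result carry the execution time.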
Hello,

I am developing a new app for Splunk (Enterprise and Cloud platforms). I would like to understand how I can publish this app as "Splunk supported". Any input is highly appreciated (with karma).

Thanks,
Mohsin
Hello all, I would like some suggestions. I am trying to search the Cisco ASA sourcetype in Splunk for the users currently logged in to an ASA, using "last 24 hours" as the time range. I am trying to count login messages (113004) and compare them with logout messages (722023); there are other message IDs for logout as well: 716002 and 113019.

It seems that I need the "latest" login. The user can log in and log out, and I can get login/logout pairs per user; if there is such a pair, I can assume the user is not logged in. The challenge is that the user can have logged in and out multiple times. My theory is that if a user has more logins than logouts (assuming there is a login later than a logout), this should be a user that is logged on. The trick seems to be finding the latest login per user and counting values in the "message_id" field. Any suggestions here?

I can check users on the ASA using the "show vpn-session summary" command, but that number almost never matches my searches. I can use any suggestion. Thanks, eholz1
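A rough sketch along the lines described, assuming sourcetype=cisco:asa and that user and message_id are already extracted (all names here are assumptions to adapt):

sourcetype=cisco:asa (message_id=113004 OR message_id=722023 OR message_id=716002 OR message_id=113019) earliest=-24h
| eval action = if(message_id=113004, "login", "logout")
| stats count(eval(action="login")) as logins,
        count(eval(action="logout")) as logouts,
        max(eval(if(action="login", _time, null()))) as last_login,
        max(eval(if(action="logout", _time, null()))) as last_logout
        by user
| where logins > logouts AND (isnull(last_logout) OR last_login > last_logout)
| convert ctime(last_login) ctime(last_logout)

The final where implements the "more logins than logouts, and the latest event is a login" theory from the question.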
Hello everyone, I hope you are doing well. I have a request for your support: I have multiple reports that I need to save locally in a folder on the server where Splunk is hosted. I would greatly appreciate your assistance in validating the process. Thank you very much.
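One hedged option, assuming the reports can be scheduled: ending a savedsearch with outputcsv writes the results as a CSV under $SPLUNK_HOME/var/run/splunk/csv/ on the search head, which a cron job or script could then copy to the target folder (the file name below is illustrative):

... your report search ...
| outputcsv daily_agent_report.csv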
Hi there, I created a Splunk dashboard (classic) which I want to download/export as a PDF. However, I am unable to do so because trellis layouts are not supported by PDF export. Also, when I try to print/export, the look of the dashboard widgets/panels gets hampered. I therefore need help exploring the best way to download the dashboard view exactly as it appears in the classic view with the dark theme, as a snapshot or image, so that the look and feel of all the graphs stays intact. Also, is it possible to schedule an email containing that downloaded dashboard image?
Hi Splunk Experts, I want to search for a word and then print the matching line and the immediately following line. Kindly assist. Thanks in advance! Note: my events are single-line events.
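A hedged sketch of one way to do this, assuming each event is one line of the file and results arrive in Splunk's default reverse-chronological order, so the row just before the current one in the pipeline is the chronologically next line (the source path and keyword are placeholders):

source="/var/log/app.log"
| autoregress _raw as next_line
| regex _raw="keyword"
| table _raw next_line

autoregress copies _raw from the previous result into next_line; filtering with regex afterwards keeps only the matching lines, each paired with the line that followed it in the file.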
I'd love to be able to dynamically adjust the timespan in a sparkline, as in:

... | eval timespan=tostring(round((now()-strptime("2023-07-26T09:45:06.00","%Y-%m-%dT%H:%M:%S.%N"))/6000))+"m"
    | chart sparkline(count,timespan) as Sparkline, count by src_ip

However, sparklines do not accept timespans in string format, and the example above results in the following error message:

Error in 'chart' command: Invalid timespan specified for sparkline.

Any suggestions? I see this question was asked back in 2019, but I couldn't find an answer.
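A possible workaround in a dashboard, sketched under the assumption that Simple XML token substitution happens before the search is parsed, so the sparkline span arrives as a literal (the token name and choices are illustrative):

<input type="dropdown" token="span_tok">
  <label>Sparkline span</label>
  <choice value="5m">5 minutes</choice>
  <choice value="30m">30 minutes</choice>
  <choice value="1h">1 hour</choice>
  <default>5m</default>
</input>
...
<query>... | chart sparkline(count, $span_tok$) as Sparkline, count by src_ip</query>

This doesn't compute the span from the data as the eval attempted, but it does let viewers change the span without editing the search.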
In my previous post I was advised to deploy the Windows TA via the deployment server, which I did, and the app is installed on the servers I want. However, even though the app is deployed, no Windows event information is being forwarded to the server. Client and server can communicate with one another, and the default Splunk port, 9997/tcp, is open. I have edited the inputs.conf file in both the app and the actual SplunkForwarder local folder, and I have set the various logs I want to disabled = 0 in the inputs file, but still no data reaches my indexes.
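For comparison, a minimal sketch of what the event log stanzas usually look like in the TA's local inputs.conf (the index name here is an assumption, and it must already exist on the indexer):

[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog

[WinEventLog://Application]
disabled = 0
index = wineventlog

If the stanzas match, checking $SPLUNK_HOME\var\log\splunk\splunkd.log on the forwarder and running "splunk list forward-server" there are reasonable next steps to confirm the output is actually connected.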
Hi everyone, could you please help me break the events below?

Expected event:

Subject : ABCD
FriendlyName : ABCD
Issuer : ABCD
Thumbprint : 3CBB2CACD16
NotAfter : 2025
Expires in (Days) : 0
ForSplunk : Break

Events as actually received:

NotAfter : 2025
Expires in (Days) : 0
ForSplunk : Break
Subject : ABCD
FriendlyName : ABCD
Issuer : ABCD
Thumbprint : 3CBB2CACD16

Subject : ABCD
FriendlyName : ABCD
Issuer : ABCD
Thumbprint : 3CBB2CACD16
NotAfter : 2025
Expires in (Days) : 68
ForSplunk : Break

I want my events to break after "ForSplunk : Break", but this creates issues for some of the events and not for all. I don't know why it works in some cases and not in others.

This is what I have in my props.conf:

[MY-SOURCETYPE]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = custom
pulldown_type = 1
TIME_FORMAT = %Y-%m-%d_%H:%M:%S_%p
SHOULD_LINEMERGE = true
MUST_BREAK_AFTER = Break
disabled = false
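A minimal sketch of an alternative, assuming every record really does end with a "ForSplunk : Break" line: let LINE_BREAKER do the event breaking directly and turn line merging off, since SHOULD_LINEMERGE with MUST_BREAK_AFTER re-merges lines heuristically and can behave inconsistently. In LINE_BREAKER the capture group is discarded, so the break lands right after the marker:

[MY-SOURCETYPE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ForSplunk\s*:\s*Break([\r\n]+)
TIME_FORMAT = %Y-%m-%d_%H:%M:%S_%p
NO_BINARY_CHECK = true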
Hello, I can't find the minimum supported version of the Splunk Universal Forwarder for the indexer discovery capability. Can you help me? Thanks in advance.
I have some questions regarding data trim:

1. From which version was data trim added?
2. What is the parameter to trim the data, i.e. how much storage must be filled before data trim happens?
3. Can we stop data trim, and how can we know that data is about to be trimmed?
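Assuming "data trim" refers to Splunk's normal retention behavior (buckets rolling to frozen and, by default, being deleted), the controls live in indexes.conf; a minimal sketch with illustrative values:

[my_index]
# roll the oldest buckets to frozen once the index exceeds this size
maxTotalDataSizeMB = 500000
# roll buckets to frozen once their newest event is older than 90 days
frozenTimePeriodInSecs = 7776000
# optional: archive instead of delete when freezing
# coldToFrozenDir = /archive/my_index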
My base search:

PAGE_ID=*
| where PAGE_ID=DGEFH OR PAGE_ID=RGHJH NOT NUM_OF_MONTHS_RUN>=6 AND NOT NUM_OF_INDIVIDUALS_ON_CASE>=4
| eventstats perc99(TRAN_TIME_MS) as Percentile by PAGE_ID
| eval timeinsecs = round((TRAN_TIME_MS/1000),2)
| stats count(eval(timeinsecs<=8)) AS countofpases count(timeinsecs) as totalcount by PAGE_CATEGORY
| eval sla = (countofpases/totalcount)*100
| table sla

I want to include all PAGE_ID values, but apply the extra criteria only to PAGE_ID=DGEFH and PAGE_ID=RGHJH.
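A hedged sketch of one way to express that, assuming the intent is "keep every page, but for DGEFH and RGHJH also require NUM_OF_MONTHS_RUN<6 and NUM_OF_INDIVIDUALS_ON_CASE<4" (note that where needs quoted string literals, otherwise DGEFH is read as a field name):

PAGE_ID=*
| where NOT (PAGE_ID="DGEFH" OR PAGE_ID="RGHJH")
        OR (NUM_OF_MONTHS_RUN<6 AND NUM_OF_INDIVIDUALS_ON_CASE<4)
| eventstats perc99(TRAN_TIME_MS) as Percentile by PAGE_ID
| eval timeinsecs = round(TRAN_TIME_MS/1000, 2)
| stats count(eval(timeinsecs<=8)) as countofpasses, count(timeinsecs) as totalcount by PAGE_CATEGORY
| eval sla = round((countofpasses/totalcount)*100, 2)
| table PAGE_CATEGORY, sla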
Hello, everyone! I have a search which ends this way:

... | table id, name | outputlookup my_lookup.csv

so my search gets results like:

id  name
1   John
2   Mark
3   James

Now, I want to write only NEW ids from the search into the lookup, ones that weren't there already. Is it possible to do this without reworking the search?
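A minimal sketch of the usual trick: append the existing lookup to the new results, dedup on id, and write the merged set back, so existing rows are kept and only genuinely new ids are added:

... | table id, name
| inputlookup append=true my_lookup.csv
| dedup id
| outputlookup my_lookup.csv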
Hi, I'm trying to monitor changing log files within directories that change regularly. These log files are 7 layers deep on a NetApp share. I'm setting up the monitor stanza in inputs.conf on a Linux box. I have tried everything, but only certain directories at the 3rd layer are being monitored, and not others at that same layer, including the one I need. I've added recursive = true and tried all variations of syntax with no luck. All permissions are the same as on the directories that can be monitored. What am I doing wrong? Thanks in advance.
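For reference, a minimal monitor stanza sketch using the "..." path wildcard, which recurses through any number of directory levels (the mount point, index, and sourcetype are placeholders):

[monitor:///mnt/netapp/appshare/.../*.log]
disabled = 0
index = main
sourcetype = app:logs

If the deeper directories still don't appear, "splunk list monitor" on the forwarder shows what the tailing processor actually expanded, which helps narrow down whether the path pattern or something else (such as a whitelist/blacklist elsewhere in inputs.conf) is excluding them.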
I know the data is there, and that this question is answerable through the Chargeback app, but has anyone written an SPL query of their environment to predict, based on current ingest rates and retention policies, what their index sizes will be in DDAS storage? I am trying to develop a good understanding of where I should be topping out after events age out and move to DDAA storage.
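A rough sketch of one way to approximate it from license usage, with loudly assumed numbers: the retention window (90 days here) and the on-disk compression factor (roughly 0.5, i.e. data occupying about half its licensed size) both need to be replaced with values from your own indexes.conf and your own measurements:

index=_internal source=*license_usage.log type="Usage" earliest=-30d
| stats sum(b) as bytes by idx
| eval avg_daily_gb = bytes/30/1024/1024/1024
| eval projected_gb = round(avg_daily_gb * 90 * 0.5, 2)
| table idx, projected_gb

Comparing projected_gb against "| dbinspect index=<name> | stats sum(sizeOnDiskMB)" for an index that has already reached steady state is a reasonable way to calibrate the compression factor.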
Hello, what is the capability needed so that a user is able to upload a file with the "Add Data" option?
Dynamic Dashboard Analytics makes it easier to troubleshoot issues using Splunk Observability Cloud dashboards. With a simple chart configuration, you can ensure that viewers of your dashboard will be able to find the source of problems faster.

Splunk Observability dashboards offer a way to pull your data together into a single, flexible view to show how things are going in your environment. Features like dashboard variables, mirroring, and permissions help you curate views for your team or to be shared throughout your organization.

One common use case when building out dashboards is to show the number of errors that have occurred over a time period. Users want to be able to find a time when the errors were occurring to help pinpoint the source of the problem. Prior to today, users needed to adjust the configuration for each chart every time they wanted to see errors over a different time period. With Dynamic Dashboard Analytics, users only have to adjust the dashboard's time window, and charts that have been configured to change with the time window will update automatically. While this is available for all charts, it is especially useful for single value charts, list charts, and table charts.

Quickly configure a chart for Dynamic Dashboard Analytics

To use Dynamic Dashboard Analytics, first click to edit a chart and then click Add Analytics. Select Transformation and then Dashboard Window. No other configuration is required. By doing this, the chart will automatically update with the latest data when the dashboard time is changed.

Get started today in Observability Cloud or start a 14-day free trial.
Hi all, I am running a dashboard which returns the total count (stats count) of a field whose values are Severity=Ok or Severity=Critical. The requirement is that if at least one field value is Severity=Critical, the color of the panel should turn red; otherwise, when Severity=Ok, it should be green. Can someone please suggest an approach?