I have one more problem here. I want the important notice to be at the top, but the dropdowns are rendering on top instead. I tried to create a row and panel for the dropdowns so that they would appear below the notice message, but then I am not able to place the submit button cleanly. Can someone help me with this?
<form version="1.1" theme="light">
  <!-- Dashboard Name -->
  <label>Dashboard</label>
  <!-- Search Panel BEGIN -->
  <!-- Search Panel END -->
  <!-- Table BEGIN -->
  <row>
    <panel>
      <html>
        <div style="background: linear-gradient(120deg,#fff5f5 0%,#fff 100%); border-left: 6px solid #ff9800; box-shadow: 0 2px 6px rgba(0,0,0,.12); border-radius: 6px; padding: 18px 24px; font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif; font-size: 15px; line-height: 1.45;">
          <h3 style="color:#d84315; margin:0 0 8px 0; display:flex; align-items:center;">
            <!-- unicode icon (search engine–friendly, scales with text size) -->
            <span style="font-size:32px; margin-right:12px;">⚠️</span> Important Notice
          </h3>
          <p style="margin:0 0 10px 0; color:#424242;">
            Avoid running the dashboard for long date ranges <strong>(Last 30 days)</strong> unless strictly needed – it may impact performance. Use shorter ranges for faster results.
          </p>
          <p style="margin:0; color:#424242;">
            Please ensure an <strong>Index Name</strong> is selected - this is required to load dashboard data.
          </p>
        </div>
      </html>
    </panel>
  </row>
  <fieldset submitButton="true" autoRun="false">
    <input type="dropdown" token="index">
      <label>Enter your Index Name</label>
      <fieldForLabel>index</fieldForLabel>
      <fieldForValue>index</fieldForValue>
      <search>
        <query> ------ |stats count by index</query>
        <earliest>$time_range.earliest$</earliest>
        <latest>$time_range.latest$</latest>
      </search>
    </input>
    <input type="text" token="support_id_tok" searchWhenChanged="false">
      <label>Enter support_id</label>
    </input>
    <input type="time" token="time_range" searchWhenChanged="false">
      <label>Select time range</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        <a class="btn btn-primary pull-left" href="/app/search/------">Reset</a>
      </html>
    </panel>
  </row>
  <row depends="$support_id_tok$">
    <panel>
      <html>
        <div style="display: flex; justify-content: space-between; border-bottom: 1px solid #ccc; padding-bottom: 5px; padding-right: 150px; margin-bottom: 10px;">
-------------Rest of the dashboard------------------
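One pattern that may help here (a minimal sketch, not a drop-in fix): Simple XML always renders a top-level <fieldset> above all rows, so to get the notice first, the inputs can be moved into a <panel> in a row below the notice. Panel-level inputs do not support the fieldset submitButton, so searchWhenChanged="true" is used instead; whether that trade-off is acceptable here is an assumption.

<form version="1.1" theme="light">
  <label>Notice-first layout (sketch)</label>
  <!-- Notice row renders first because there is no top-level fieldset -->
  <row>
    <panel>
      <html>
        <div style="border-left:4px solid #ff9800; padding:8px 12px;">
          <strong>Important Notice</strong>: avoid long time ranges; select an Index Name.
        </div>
      </html>
    </panel>
  </row>
  <!-- Inputs live inside a panel so they appear below the notice.
       No submitButton is available at panel level, hence searchWhenChanged. -->
  <row>
    <panel>
      <input type="time" token="time_range" searchWhenChanged="true">
        <label>Select time range</label>
        <default>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
        </default>
      </input>
    </panel>
  </row>
</form>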
@abhi04  Try including the Status field explicitly:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val, latest(Status) as Status by Instance host

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@splunklearner  Can you try the below? You can reduce the font-size further if needed.

<dashboard version="1.1" theme="light">
  <label>Your dashboard name</label>
  <!-- ===== NOTICE PANEL ===== -->
  <row>
    <panel>
      <html>
        <style>
          .compact-warning {
            background-color: #fff3cd;
            border-left: 4px solid #ffa500;
            padding: 10px 15px;
            font-family: Arial, sans-serif;
            font-size: 13px;
            margin-bottom: 5px;
            border-radius: 4px;
            max-width: 800px;
          }
          .compact-warning h3 {
            color: #d9534f;
            margin: 0 0 5px 0;
            font-size: 12px;
          }
          .compact-warning p {
            margin: 3px 0;
          }
        </style>
        <div class="compact-warning">
          <h3>⚠️ Performance Notice</h3>
          <p><strong>Please avoid selecting long time ranges</strong> (e.g., <em>Last 30 days</em>) unless absolutely necessary, as it may impact dashboard performance.</p>
          <p>Make sure to choose your <strong>Index Name</strong> to begin viewing data.</p>
        </div>
      </html>
    </panel>
  </row>
  <!-- rest of your dashboard -->
</dashboard>

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Thank you for your reply @PrewinThomas. Unfortunately I'm still getting both the "2025-07-20" and "none" values in one event.
@alvinsullivan01  If the field does not exist at the moment Splunk attempts to extract the time, it may report it as "none". Can you try the below config?

props.conf

[test_log]
SHOULD_LINEMERGE = false
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true
TRUNCATE = 0
TRANSFORMS-addtimestamp = test_add_timestamp
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 50
TZ = UTC

transforms.conf

[test_add_timestamp]
DEST_KEY = _raw
REGEX = ^([^\s]+).*?({.*})$
FORMAT = {"timestamp":"$1", "data":$2}

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
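To make the mechanics concrete, here is a hypothetical sample event (not from the original post) showing how the transform rewrites _raw before timestamp extraction runs; REGEX capture $1 is the leading timestamp and $2 is the JSON payload:

# incoming raw event (hypothetical sample)
2025-07-20T10:15:00+0000 {"level":"info","msg":"ok"}

# _raw after the test_add_timestamp transform
{"timestamp":"2025-07-20T10:15:00+0000", "data":{"level":"info","msg":"ok"}}

The idea is that with TIME_PREFIX = "timestamp":" the timestamp processor then finds the value immediately after the prefix, rather than falling back to "none".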
Hi Everyone, I am in the process of installing Splunk UBA and have a question regarding the storage partitioning requirements. The official documentation (link below) states that separate physical disks, /dev/sdb and /dev/sdc, are required for specific mount points to ensure performance.

Documentation Link: https://docs.splunk.com/Documentation/UBA/5.3.0/Install/InstallSingleServer#Prepare_the_disks

However, my current server is configured with a single physical disk (/dev/sda) that uses LVM to create multiple logical volumes. Here is my current lsblk output:

[zake@ubaserver]# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   2.7T  0 disk
├─sda1               8:1    0     1G  0 part /boot/efi
├─sda2               8:2    0     1G  0 part /boot
└─sda3               8:3    0   2.7T  0 part
  ├─rhel-root      253:0    0    80G  0 lvm  /
  ├─rhel-swap      253:1    0    16G  0 lvm  [SWAP]
  ├─rhel-var_vcap2 253:2    0     1T  0 lvm  /var/vcap2
  ├─rhel-var_vcap1 253:3    0     1T  0 lvm  /var/vcap1
  ├─rhel-home      253:4    0 118.8G  0 lvm  /home
  └─rhel-backup    253:5    0   500G  0 lvm  /backup
sr0                 11:0    1  1024M  0 rom

My question is: Can my existing logical volumes, /dev/mapper/rhel-var_vcap1 and /dev/mapper/rhel-var_vcap2, be used as a substitute for the required /dev/sdb and /dev/sdc disks? I understand the requirement for separate physical disks is likely due to I/O performance. Would using this LVM setup on a single disk be a supported configuration, or is adding two new physical/virtual disks a mandatory step? Thank you for your guidance.
@neerajs_81  You can use the below for avg_mins and avg_hours.

| eval time_parts=split(avg_time, ", ")
| eval hours=tonumber(replace(mvindex(time_parts, 0), " hour[s]?", ""))
| eval minutes=tonumber(replace(mvindex(time_parts, 1), " minute[s]?", ""))
| eval total_minutes=(hours * 60) + minutes
| stats avg(total_minutes) as average_ttc_overall
| eval avg_hours=floor(average_ttc_overall / 60)
| eval avg_minutes=round(average_ttc_overall % 60)
| eval hour_avg=avg_hours . " hr " . avg_minutes . " mins"
| table average_ttc_overall hour_avg

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
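As a worked example of the parsing steps (the avg_time value "2 hours, 15 minutes" is a hypothetical sample matching the format the replace() calls expect):

| makeresults
| eval avg_time="2 hours, 15 minutes"
| eval time_parts=split(avg_time, ", ")
| eval hours=tonumber(replace(mvindex(time_parts, 0), " hour[s]?", ""))
| eval minutes=tonumber(replace(mvindex(time_parts, 1), " minute[s]?", ""))
| eval total_minutes=(hours * 60) + minutes

This yields hours=2, minutes=15, and total_minutes=135, which the later stats and eval steps average and convert back into the "N hr M mins" display format.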
Thank you for your reply @PickleRick. Yes, you are right, that is another possible solution. But if possible, the requirement I have now is to have the "timestamp" field included in the event JSON. Do you have any idea why my "timestamp" field has both the value "2025-07-20" and "none"?
Thank you for your reply @livehybrid. I am currently using TIME_FORMAT as you suggested, but correct me if I'm wrong: isn't that for _time, not the "timestamp" field?
Thank you for your reply @gcusello. Here is how it looks in the raw visualization. It looks like valid JSON. Do you have any idea why I have both the "2025-07-20" and "none" values under timestamp?
If I understand correctly, you want all the records from the lookup which contain NUMBER, and you want to show ORIGINDATE from the sourcetable data in the index. You are right in understanding that joins are not the right way to do things in Splunk; stats is the normal way.

Since your sourcetable data contains both fields, you can do it like this:

index=test sourcetype="table1"
| stats count by NUMBER ORIGINDATE
| inputlookup append=t TempTableLookup.csv
| stats values(ORIGINDATE) by NUMBER

The first stats is there in case you have multiple events in your sourcetable data per NUMBER - you could replace it with

| stats latest(ORIGINDATE) as ORIGINDATE by NUMBER

if that is more appropriate for your dataset. Then the inputlookup appends the lookup table to your existing sourcetable data, and the second stats "joins" the two together around NUMBER, so all the ORIGINDATE values from sourcetable are combined with all the rows in the lookup. You end up with ORIGINDATE for those that have values in the data, but empty ORIGINDATE values for the lookup rows that were not in the data. This is often referred to in these forums as proving the negative. Hope this helps.
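Assembled end to end with the latest() variant, the full pipeline would read (a sketch using the field and lookup names from above):

index=test sourcetype="table1"
| stats latest(ORIGINDATE) as ORIGINDATE by NUMBER
| inputlookup append=t TempTableLookup.csv
| stats values(ORIGINDATE) as ORIGINDATE by NUMBER

The final table has one row per NUMBER from either source, with ORIGINDATE blank for lookup entries that never appeared in the index data.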
Hi everyone! I am new to Splunk, so this is probably really easy for many of you.

I am trying to left join a lookup with a source table. I tried this initially and it looks great, but it's not displaying the total number of records contained in the lookup table. I need to display all records in the lookup, showing all matching records and a blank where there is no match in table1.

The TempTableLookup.csv (lookup table) has just 1 column called "NUMBER" with 7,500 records. Table1 has NUMBER, ORIGINDATE and other columns which are not needed; it has 360,000 records.

So I run this query, but I get 7,479 instead of the total 7,500. There are around 20+ records that do not have an ORIGINDATE or whose lookup number does not exist in table1.

index=test sourcetype="table1"
| lookup TempTableLookup.csv NUMBER output NUMBER as matched_number
| where isnotnull(matched_number)
| table NUMBER ORIGINDATE

So I read I need to do a left join, and I tried the query below; it brings back all 7,500 records I want, but it does not bring back the ORIGINDATE. Could someone please let me know what I am doing wrong in this second query? I know that left joins are not recommended, but I cannot think of any other way to get what I need.

| inputlookup TempTableLookup.csv
| join type=left NUMBER
    [ search index=test sourcetype="table1"
      | dedup NUMBER
      | fields NUMBER, ORIGINDATE ]
| table NUMBER ORIGINDATE

The output should look like:

NUMBER     ORIGINDATE
123456     01/10/2025
128544     05/05/2029

and so forth... I'd appreciate greatly any ideas on how to do this. Thank you in advance and have a great day, Diana
The first search does not have a valid value for _time; you need to parse the time value from the dummy data.

| makeresults format=csv data="interactionid,_time,elapsed,msgsource
1,2025-07-31,00:00.756,retrieveAPI
2,2025-07-31,00:00.556,createAPI
3,2025-07-31,00:00.156,createAPI
4,2025-07-31,00:00.256,updateAPI
5,2025-07-31,00:00.356,retrieveAPI
6,2025-07-31,00:00.156,retrieveAPI
7,2025-07-31,00:01.056,createAPI
8,2025-07-31,00:00.256,retrieveAPI
9,2025-07-31,00:06.256,updateAPI
10,2025-07-31,00:10.256,createAPI"
| eval _time=strptime(_time,"%F")
| rex field=elapsed "^(?<minutes>\d+):(?<seconds>\d+)\.(?<milliseconds>\d+)"
| eval TimeMilliseconds = (tonumber(minutes) * 60 * 1000) + (tonumber(seconds) * 1000) + (tonumber(milliseconds))
| timechart span=1d count as AllTransactions, avg(TimeMilliseconds) as AvgDuration
    count(eval(TimeMilliseconds<=1000)) as "TXN_1000",
    count(eval(TimeMilliseconds>1000 AND TimeMilliseconds<=2000)) as "1sec-2sec"
    count(eval(TimeMilliseconds>2000 AND TimeMilliseconds<=5000)) as "2sec-5sec",
    by msgsource
| untable _time msgsource count
| eval group=mvindex(split(msgsource,": "),0)
| eval msgsource=mvindex(split(msgsource,": "),1)
| eval _time=_time.":".msgsource
| xyseries _time group count
| eval msgsource=mvindex(split(_time,":"),1)
| eval _time=mvindex(split(_time,":"),0)
| table _time msgsource AllTransactions AvgDuration TXN_1000 "1sec-2sec" "2sec-5sec"
Thanks for the response. I was trying to check the accuracy of both queries, as I see a difference in the counts between the one you provided and the one from the accepted answer. I edited the queries to use some mock data. I am not able to use the mock data with the query from the accepted answer; I would appreciate it if you could help me fix that so I can compare the results.

| makeresults format=csv data="interactionid,_time,elapsed,msgsource
1,2025-07-31,00:00.756,retrieveAPI
2,2025-07-31,00:00.556,createAPI
3,2025-07-31,00:00.156,createAPI
4,2025-07-31,00:00.256,updateAPI
5,2025-07-31,00:00.356,retrieveAPI
6,2025-07-31,00:00.156,retrieveAPI
7,2025-07-31,00:01.056,createAPI
8,2025-07-31,00:00.256,retrieveAPI
9,2025-07-31,00:06.256,updateAPI
10,2025-07-31,00:10.256,createAPI"
| rex field=elapsed "^(?<minutes>\\d+):(?<seconds>\\d+)\\.(?<milliseconds>\\d+)"
| eval TimeMilliseconds = (tonumber(minutes) * 60 * 1000) + (tonumber(seconds) * 1000) + (tonumber(milliseconds))
| timechart span=1d count as AllTransactions, avg(TimeMilliseconds) as AvgDuration
    count(eval(TimeMilliseconds<=1000)) as "TXN_1000",
    count(eval(TimeMilliseconds>1000 AND TimeMilliseconds<=2000)) as "1sec-2sec"
    count(eval(TimeMilliseconds>2000 AND TimeMilliseconds<=5000)) as "2sec-5sec",
    by msgsource
| untable _time msgsource count
| eval group=mvindex(split(msgsource,": "),0)
| eval msgsource=mvindex(split(msgsource,": "),1)
| eval _time=_time.":".msgsource
| xyseries _time group count
| eval msgsource=mvindex(split(_time,":"),1)
| eval _time=mvindex(split(_time,":"),0)
| table _time msgsource AllTransactions AvgDuration TXN_1000 "1sec-2sec" "2sec-5sec"

This query created the table, but the counts are all 0s. And here is the edited version of the query you shared, which does show results:

| makeresults format=csv data="interactionid,_time,elapsed,msgsource
1,2025-07-31,00:00.756,retrieveAPI
2,2025-07-31,00:00.556,createAPI
3,2025-07-31,00:00.156,createAPI
4,2025-07-31,00:00.256,updateAPI
5,2025-07-31,00:00.356,retrieveAPI
6,2025-07-31,00:00.156,retrieveAPI
7,2025-07-31,00:01.056,createAPI
8,2025-07-31,00:00.256,retrieveAPI
9,2025-07-31,00:06.256,updateAPI
10,2025-07-31,00:10.256,createAPI"
| eval total_milliseconds = 1000 * (strptime("00:" . elapsed, "%T.%N") - relative_time(now(), "-0d@d"))
| eval timebucket = case(total_milliseconds <= 1000, "TXN_1000", total_milliseconds <= 2000, "1sec-2sec", total_milliseconds <= 5000, "2sec-5sec", true(), "5sec+")
| rename msgsource as API
| bucket _time span=1d
| eventstats avg(total_milliseconds) as AvgDur by _time API
| stats count by AvgDur _time API timebucket
| tojson output_field=api_time _time API AvgDur
| chart values(count) over api_time by timebucket
| addtotals
| spath input=api_time
| rename time as _time
| fields - api_time

Your query shows the correct results, but the fields are not in the order I want to display them. Any help fixing both queries would be appreciated.
Hi @splunklearner  Without your full dashboard code it's going to be hard for me to make these changes blind; however, if you look at the CSS within the code I provided, there are a number of settings you can update, such as font-size, which is currently 15px but could be reduced to 10px for much smaller text. If this has been helpful, please consider adding karma to the relevant posts. Many thanks, Will
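For illustration only, this is the kind of tweak being described, borrowing the .compact-warning class from the earlier reply; the actual selectors and values in your dashboard may differ:

<html>
  <style>
    /* Shrink the notice: smaller text and a tighter box (values illustrative) */
    .compact-warning { font-size: 10px; padding: 6px 10px; max-width: 600px; }
    .compact-warning h3 { font-size: 11px; margin: 0 0 3px 0; }
  </style>
</html>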
Hi @muku  How does the app convert the file? Does it use a monitor:// stanza within inputs.conf and then apply props/transforms to manipulate the file, or is it done with a modular input? Ultimately, the app might need to go on a forwarder if the data resides there or is pulled from there, and/or on the indexers if index-time extractions are being applied. If search-time extractions are applied, then the app will also need to go on the search heads. If you're able to provide more info, we will be able to give more tailored advice.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
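For context, a monitor:// stanza of the kind mentioned looks like this (the path, index, and sourcetype are hypothetical placeholders, not taken from the app in question):

inputs.conf (on the forwarder where the files land)

[monitor:///opt/customer/files/*.ckls]
sourcetype = customer:ckls
index = main
disabled = false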
Hi @abhi04  Are you using Aggregation Policies to trigger your alerts, or KPI alerts? I'm not sure how to achieve this with KPI alerts, but if you are using aggregation policies, then you might be able to add some logic there (similar to how you would apply a lookup) to do an eval based on the value.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
I need to configure a certain customer app to ingest files. Those files need an add-on which converts them so they can be read by Splunk; they are in ckls format. I already have the add-on and I have configured it in the deployment apps. How do I connect it with the customer app so that the data shows on the dashboard?
The performance notice text seems misaligned, and I want this box to be a bit smaller (maybe length-wise), because I have nearly 10 dropdowns and below them there is more text and panels. Because this note is so big, the panels are not visible initially; users need to scroll down. It feels a bit awkward to me.