All Posts



Hi, use spath: https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Spath

To see why it happens, add an eval with just

| eval subject2=Item.Subject ... | table ..., subject2

(subject2 will be null). I have a Splunk index in JSON that has the keys SRV and CONTENT_LENGTH. If I do

index=someindex | eval CONTENT_TYPE=if(isnull(SRV.CONTENT_TYPE),"true","false") | table SRV.CONTENT_TYPE, CONTENT_TYPE

I get the same problem as you do. But like below, I don't:

index=someindex | spath output=qwe "SRV.CONTENT_TYPE" | eval CONTENT_TYPE=if(isnull(qwe),"true","false") | table SRV.CONTENT_TYPE, CONTENT_TYPE
This resolution worked with minor changes. Many thanks for your help!

| chart count OVER transaction_id BY source
Hello, I'm trying to figure out why this eval statement testing for a null value always evaluates to "true", even when the field does contain data. Here is what the data looks like in the results:
Hi everyone, I get an error when opening Splunk Security Essentials. It says:

A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details.

When I check it in my browser console, it says:

GET http://my.ipaddress:8000/en-US/splunkd/__raw/servicesNS/zake/system/storage/collections/data/RecentlyViewedKO?limit=1&query=%7B%22%24and%22%3A%5B%7B%22type%22%3A1%7D%2C%7B%22id%22%3A%22home%22%7D%2C%7B%22app%22%3A%22Splunk_Security_Essentials%22%7D%5D%7D 503 (Service Unavailable)

Since I know a 503 code is an error from the server, is there any website where I can check whether that server is down? I checked in StatusGator and everything is okay. Any solution?
Hi @ITWhisperer,

I am getting events which contain "slot" messages.

Events:

{"priority":6,"sequence":4704,"sec":695048,"usec":639227,"msg":"hv_netvsc 54243fd-13dc-6043-bddd-13dc6045bddd eth0: VF slot 1 added
{"priority":6,"sequence":4698,"sec":695037,"usec":497286,"msg":"hv_netvsc 54243fd-13dc-6043-bddd-13dc6045bddd eth0: VF slot 1 removed

Query used:

index="index1"
| search "slot"
| rex field=msg "(?<action>added|removed)"
| eval added_time=if(action="added",strftime(_time, "%H:%M:%S"),null())
| eval removed_time=if(action="removed",strftime(_time, "%H:%M:%S"),null())
| sort 0 _time
| streamstats max(added_time) as added_time latest(removed_time) as removed_time by host, slot
| eval added_epoch=strptime(added_time, "%H:%M:%S")
| eval removed_epoch=strptime(removed_time, "%H:%M:%S")
| eval downtime=if(isnotnull(added_epoch) AND isnotnull(removed_epoch), removed_epoch - added_epoch, 0)

Here I tried converting the time to hour:min:sec and later back into epoch to get the difference in seconds, but it's not working and downtime always shows 0.
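One possible simplification, sketched here without access to the real data: keep _time as epoch seconds instead of round-tripping through %H:%M:%S strings (which also breaks across midnight). Note the query above never extracts the slot field it later uses in streamstats, so the second rex here is an assumption about the message layout:

```
index="index1" "slot"
| rex field=msg "(?<action>added|removed)"
| rex field=msg "VF slot (?<slot>\d+)"
| sort 0 _time
| eval removed_epoch=if(action="removed", _time, null())
| streamstats latest(removed_epoch) as removed_epoch by host, slot
| eval downtime=if(action="added" AND isnotnull(removed_epoch), _time - removed_epoch, null())
```

With this shape, downtime is computed in whole seconds at each "added" event as the gap since the most recent "removed" event for the same host and slot.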
I recently had an error message pop up when synchronizing from our on-prem AD servers to Entra about an account issue. I found that the account in question had all the attributes correct except for the userPrincipalName. In the UPN, instead of having username@mydomain.com, it was changed to "\"@mydomain.com. I am trying to figure out, in Splunk Cloud, who or which account made that change. I have searched for Event ID 4738 and it shows the UPN with the "\", but it doesn't tell me who made the change. I am also looking in the Windows TA add-on to see if I can find any more info there.
Thank you @isoutamo, I changed the global setting to HTTPS and it works perfectly fine. I just don't understand how it works: doesn't the sender need the public key? How does it work?
Hi @onthakur, use the chart command instead of stats: <your_search> | chart count OVER source BY transaction_id Ciao. Giuseppe
It's a bit long; I hope I won't bore you. I made a Splunk graph with two lines because I need to see the values compared to the average of the last 10 days. So:

One line is the percentage over a time period, let's say today 28 Jan 14:20 --> 14:25.
The second line is the average percentage for the same time period over the last 10 days, 18-27 Jan 14:20 --> 14:25.

What I can tell by looking at this graph is stuff like: "Today at 14:20 we had x% more/less than the last 10-day average, but at 14:21 we had x% more/less", etc.

It's important to always have time snapped to the start of the minute (so if "now" is 17:31:23 then the last minute is 17:30:00.000 --> 17:30:59.999).

To make the search for this graph, I am using earliest= and latest= like this:

index=logs earliest=-5m@m latest=-1m@m | .... | append [search index=logs ( (earliest=-24h-5m@m AND latest=-24h-1m@m) OR (earliest=-48h-5m@m AND latest=-48h-1m@m) OR ... ) | ... ] | ...

The search itself works OK, but my problem is when I try to make a dashboard for it. The dashboard needs to contain a time input with a token I named "thetime". Usually, you make the dashboard search use this time input by selecting "Shared Time picker (thetime)". This is not possible for my search, so I need to somehow specify $thetime.earliest$ / $thetime.latest$ in the search query. But I cannot just do something straightforward like:

index=logs earliest=$thetime.earliest$ latest=$thetime.latest$-24h@m | ...

Depending on what I select in the time picker, I can end up with messages like: Invalid value "now-24h" for time term 'latest'.

I know about | addinfo (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Addinfo), but it's impossible to use "info_max_time" in the first part of the searches, only after the pipe to addinfo. And even if it did work somehow, there would still be the issue of the required minute snap to 00 --> 59 seconds.
My approach was to use the <init> part of the dashboard XML to calculate all the needed earliest/latest values. Currently I am dealing only with relative ranges; I will deal with exact dates (between) later. So in my dashboard XML I have this:

<form version="1.1" theme="light">
  <init>
    <eval token="RSTART">strftime(relative_time(now(), $thetime.earliest$),"%Y-%m-%d %H:%M:00")</eval>
    <eval token="REND">strftime(relative_time(now(), $thetime.latest$),"%Y-%m-%d %H:%M:00")</eval>
  </init>
  ...
  <query>index=logs | eval RRSTART="$RSTART$", RREND="$REND$" | table _time, RRSTART, RREND</query>
  ...
</form>

The following part drives me crazy. Assuming now is 17:55:02, I access the Splunk board via this link:

https://splunk-self-hosted/en-US/app/search/DASHBOARD_NAME

When I first load the page, I see the time picker and a submit button. No results are shown until I press submit. I select "Relative", earliest 1 hour ago, "No snap-to", latest now, then apply and submit. The browser URL changes to:

https://splunk-self-hosted/en-US/app/search/DASHBOARD_NAME?form.thetime.earliest=-1h&form.thetime.latest=now

and the results I get are:

RRSTART: 2025-01-28 17:55:00
RREND: 2025-01-28 17:55:00 (same values, bad)

At this point, I just click the refresh button of the browser, and I get:

RRSTART: 2025-01-28 16:55:00
RREND: 2025-01-28 17:55:00 (correct values)

So basically, if I always click submit and then reload, I get the correct values. From what I understand from https://docs.splunk.com/Documentation/Splunk/9.4.0/Viz/tokens#Set_tokens_on_page_load this should not happen.

As for my questions: can anyone tell me if I am doing something wrong with <init>? Maybe it cannot be used this way with dashboard tokens? Or maybe there is another way to do this without using <init>? Thank you for taking the time to read.

Using Splunk Enterprise Version: 9.1.0.2
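For what it's worth, one untested alternative is to compute the tokens in the time input's <change> handler rather than in <init>; a <change> block re-runs every time the picker value changes, which may avoid the stale-on-first-submit behavior. A sketch, mirroring the expressions from the <init> block above (the input element shown here is an assumption, not taken from the actual dashboard):

```xml
<input type="time" token="thetime" searchWhenChanged="true">
  <label>Time range</label>
  <default>
    <earliest>-1h</earliest>
    <latest>now</latest>
  </default>
  <change>
    <!-- Recompute the snapped boundaries whenever the picker changes -->
    <eval token="RSTART">strftime(relative_time(now(), $thetime.earliest$),"%Y-%m-%d %H:%M:00")</eval>
    <eval token="REND">strftime(relative_time(now(), $thetime.latest$),"%Y-%m-%d %H:%M:00")</eval>
  </change>
</input>
```

This keeps the minute snap (the ":00" in the strftime format) while tying token computation to the picker rather than to page load.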
@gcusello Firewalld is enabled and I have all the respective ports enabled as well.

firewall-cmd --zone=public --permanent --add-port 8000/tcp
firewall-cmd --reload

I have worked with Splunk Support and Red Hat Support; they have verified my configuration and still didn't figure it out. So the only thing it could be is a hardening configuration from CIS Level 1. Thank you, buddy, for your polite comments.
Team, I got stats output as below and I need to rearrange the current stats output:

transaction_id  source  count
12345           ABC     1
12345           XYZ     1

Required output:

transaction_id  ABC  XYZ
12345           1    1
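For reference, the reshape described above can be reproduced end to end with a toy dataset (a sketch; makeresults fabricates the two sample rows from the post):

```
| makeresults count=2
| streamstats count as n
| eval transaction_id=12345, source=if(n=1,"ABC","XYZ"), count=1
| chart sum(count) OVER transaction_id BY source
```

chart pivots the distinct values of the BY field (source) into columns, one row per value of the OVER field (transaction_id), which yields the required layout.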
Hi @jwestbank, a very stupid question: did you disable iptables or firewalld on port 8000? Ciao. Giuseppe
Thanks, worked great
Hey Splunk Community, I was wondering if anyone has figured out what causes the GUI not to work at all in a new install of Splunk 9.3 or 9.4 on a CIS Red Hat 9 Level 1 image. I have been trying to manage the Splunk server with the GUI and it just won't come up. I can SSH all day long, but no GUI. I came to the conclusion that it happens only on the CIS Red Hat 9 Level 1 image and not on an original RHEL 9 image. This issue does not appear on a CIS Red Hat 8 Level 1 image. If anyone knows which CIS control configuration is causing this, it would be greatly appreciated. I am positive that anyone in the government sector hardening their servers with CIS RHEL 9 control images is going to run across this problem. Thanks - Johnny
Seems the answer is no, in the strict sense of the rules.

https://docs.splunk.com/Documentation/Splunk/9.4.0/DashStudio/chartsPie#Pie_chart_options

labelDisplay | ("values" | "valuesAndPercentage" | "off") | default: "values" | Specify whether to display the labels and/or slice percentages.

The options only let you show the values in every mode or turn everything off.

Potential workaround: try appending some transforms to your data set to add the percentage and remove the values, then set "labelDisplay" to "values" only, which should then display a percentage value.
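The suggested transform might look like this in SPL (a sketch; the category field name is an assumption standing in for whatever drives your pie slices):

```
<your_search>
| stats count by category
| eventstats sum(count) as total
| eval percent=round(count/total*100, 1)
| fields category percent
```

With percent left as the only numeric field, a labelDisplay of "values" should render percentages rather than raw counts.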
Hi @rahulkumar, in this way you extract the host field from the host.name JSON field. Then you can extract the other fields relevant to you. Finally, you remove everything except message and put that field in _raw; this way you have the original event, before Logstash ingestion. Remember that the order of execution is relevant; for this reason, in props.conf, there's a progressive number in the transformation names. Ciao. Giuseppe
Sorry, are you: 1) Trying to make a single Search Head Deployer serve 2 individual clusters? OR 2) Trying to move a single Search Head Deployer away from Cluster X to now serve Cluster Y?
Well, I suppose you could add these to the manifest file, but I can't stress enough how much I wouldn't do that. That said, I was under the impression that at some point in Splunk 8.x and later they stopped allowing Splunk processes to reference or pull in any Python 2.x capabilities and libraries. I'm not very experienced in that area, though, so I could easily be wrong. Moral of the story: get off Python 2.x reliance ASAP.
Actually, it is a new project, and we are creating sample dashboards for application teams. I just want to check whether there are any use cases related to my fields given above...
First, one should not receive UDP (or TCP) directly into a Splunk instance. Instead, use a dedicated syslog receiver (such as syslog-ng) to save the data to disk and monitor the files with a Universal Forwarder.

If that's not feasible, try these props to better extract timestamps from the events:

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\<
TIME_PREFIX = \>\s
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 32
EXTRACT-HostName = \b(?P<HostName>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+(-\d{2}:\d{2})?)
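As a quick sanity check of the TIME_FORMAT above, the equivalent strptime pattern can be tested at search time (a sketch; the sample timestamp is made up):

```
| makeresults
| eval ts="2025-01-28T17:30:59.123456-0500"
| eval parsed=strptime(ts, "%Y-%m-%dT%H:%M:%S.%6N%z")
| eval roundtrip=strftime(parsed, "%Y-%m-%dT%H:%M:%S.%6N%z")
```

If parsed comes back non-null and roundtrip matches ts, the format string is consistent with the event timestamps.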