All Posts



What message lurks beneath the yellow triangles? There are a few concerns:
1) The event timestamps may be too old to extract properly
2) MAX_TIMESTAMP_LOOKAHEAD of 15 is too short for times after 9:59
3) The sourcetype name is "Test9" in props.conf, but "test9" is selected in the wizard. Sourcetypes are case-sensitive by default.
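To see why concern 2 matters: MAX_TIMESTAMP_LOOKAHEAD counts characters read after TIME_PREFIX matches, so it must cover the longest timestamp the format can produce. A quick Python check of string lengths for a %m/%d/%Y %k:%M-style timestamp (the sample values below are made up for illustration):

```python
# Hypothetical timestamps in "%m/%d/%Y %k:%M" form; MAX_TIMESTAMP_LOOKAHEAD
# must cover the longest string that can occur, or the minutes get cut off.
samples = [
    "5/17/2024 9:59",    # single-digit hour
    "5/17/2024 10:59",   # hour rolls over to two digits
    "05/17/2024 10:59",  # zero-padded month/day as well
]

for ts in samples:
    # Flag anything a lookahead of 15 characters would truncate.
    print(f"{len(ts):2d} chars, truncated at 15: {len(ts) > 15} -> {ts}")
```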
The reason for setting up the example data in that way is based on my understanding of your description of the problem. Generally, the easiest way to get advice is to post an example of the data from both types and demonstrate what you want to achieve with the output. No, you don't need to append: the whole makeresults/append section just sets up an example data set to show how to go about joining the two. If you can post an example of the two data sources, it will be easier to show how it should be done.
An alternative to regex is to use coalesce.  For example,

| foreach RU3NDS_* [eval RU3NDS = coalesce(RU3NDS, <<FIELD>>)]

As @gcusello mentioned, if you intend to use the join command, consider stats or another method instead.  For example,

| foreach RU3NDS_* [eval RU3NDS = coalesce(RU3NDS, <<FIELD>>)]
| fields - RU3NDS_*
| stats values(*) as * dc(*) as dc_* by RU3NDS

Here is a complete emulation to illustrate how to correlate without using the join command:

| makeresults format=csv data="RU3NDS, left_data_var
foo1, leftbar1
foo2, leftbar1
foo1, leftbar2
foo3, leftbar3"
| append
    [makeresults format=csv data="RU3NDS_abcd, right_data_var
foo1, rightbar1
foo2, rightbar3
foo1, rightbar2
foo3, rightbar1"]
| append
    [makeresults format=csv data="RU3NDS_efgh, right_data_var
foo1, rightbar3
foo2, rightbar1
foo1, rightbar3
foo3, rightbar2"]
``` data emulation above ```
| foreach RU3NDS_* [eval RU3NDS = coalesce(RU3NDS, <<FIELD>>)]
| fields - RU3NDS_*
| stats values(*) as * dc(*) as dc_* by RU3NDS

The output is

RU3NDS  dc_left_data_var  dc_right_data_var  left_data_var      right_data_var
foo1    2                 3                  leftbar1 leftbar2  rightbar1 rightbar2 rightbar3
foo2    1                 2                  leftbar1           rightbar1 rightbar3
foo3    1                 2                  leftbar3           rightbar1 rightbar2
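For readers less familiar with SPL, the coalesce-then-stats pattern in the emulation above can be sketched outside Splunk too. A rough Python equivalent of the key steps (field names mirror the emulation; this is an illustration of the logic, not how Splunk runs it internally):

```python
# Each dict is one event; the join key appears under different field names,
# as with RU3NDS vs RU3NDS_abcd / RU3NDS_efgh in the emulation above.
rows = [
    {"RU3NDS": "foo1", "left_data_var": "leftbar1"},
    {"RU3NDS": "foo1", "left_data_var": "leftbar2"},
    {"RU3NDS_abcd": "foo1", "right_data_var": "rightbar1"},
    {"RU3NDS_efgh": "foo1", "right_data_var": "rightbar3"},
    {"RU3NDS_abcd": "foo2", "right_data_var": "rightbar3"},
]

def coalesce_key(row):
    # Mimics: | foreach RU3NDS_* [eval RU3NDS = coalesce(RU3NDS, <<FIELD>>)]
    return row.get("RU3NDS") or next(
        v for k, v in row.items() if k.startswith("RU3NDS_"))

# Mimics: | stats values(*) as * by RU3NDS (distinct values per field, per key)
grouped = {}
for row in rows:
    key = coalesce_key(row)
    for field, value in row.items():
        if field == "RU3NDS" or field.startswith("RU3NDS_"):
            continue
        grouped.setdefault(key, {}).setdefault(field, set()).add(value)

print(grouped["foo1"])
```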
After installing Alert Manager Enterprise 3.0.6 in Splunk Cloud, the Start screen never appears, and the error "JSON replay had no payload value" is shown 10 times.

Q. Has anyone run into this error?
I am a little confused by the SPL.  Did you try this?

| makeresults
| eval src_ip="10.0.0.0 166.226.118.0 136.226.158.0 185.46.212.0 2a03:eec0:1411::"
| makemv delim=" " src_ip
| mvexpand src_ip
| lookup zscalerip.csv CIDR AS src_ip OUTPUT CIDR as CIDR_match
| eval Is_managed_device=if(isnull(CIDR_match), "false", "true")
| table src_ip Is_managed_device
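For context, a CIDR-matched lookup is doing a subnet membership test per event. Here is a Python sketch of that same check (the managed ranges below are hypothetical stand-ins for the CIDR column of zscalerip.csv):

```python
import ipaddress

# Hypothetical managed ranges standing in for zscalerip.csv's CIDR column.
managed_cidrs = [
    ipaddress.ip_network("136.226.0.0/16"),
    ipaddress.ip_network("2a03:eec0::/32"),
]

def is_managed_device(src_ip):
    # Same test a CIDR match-type lookup performs for each src_ip;
    # membership across IPv4/IPv6 versions is simply False.
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in managed_cidrs)

for ip in ["10.0.0.0", "136.226.158.0", "2a03:eec0:1411::"]:
    print(ip, "true" if is_managed_device(ip) else "false")
```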
You can make volunteers' lives easier by listing sample lookup content in table format and constructing mock/sample SQL values that correspond to the illustrated lookup table, or vice versa. Anyway, there are often different ways to solve the same problem depending on actual data characteristics and nuances in requirements.

If I understand you correctly, you want to categorize events into some lk_wlc_app_name based on fragments of SQL that may match lk_wlc_app_short. You mentioned that the SQL has no structure (regarding the key strings you are trying to match); your illustrated data suggest that your intended matches do not fall on "natural" word boundaries. This puts any strategy at risk of being too aggressive and giving false positives.

Because of the constraints, one very aggressive strategy is to use wildcard matches. You need to set the "Match type" of lk_wlc_app_short to WILDCARD in "Advanced Options", and your table should contain wildcards before and after the short string, like

lk_wlc_app_short     lk_wlc_app_name
*ART*                Attendance Roster Tool
*Building_Mailer*    Building Mailer
*SCBT*               Service Center Billing Tool

Once this is set up, all you need is the lookup, like

| lookup lookup_weblogic_app lk_wlc_app_short as SQL

Again, this is perhaps not an optimal solution, because matching with a leading wildcard is expensive.
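To illustrate the false-positive risk that comes with leading/trailing wildcards, here is a Python sketch of the same first-match wildcard logic (the SQL fragments below are invented for the example):

```python
from fnmatch import fnmatchcase

# Mirrors the wildcard table above: lk_wlc_app_short -> lk_wlc_app_name.
wildcard_table = [
    ("*ART*", "Attendance Roster Tool"),
    ("*Building_Mailer*", "Building Mailer"),
    ("*SCBT*", "Service Center Billing Tool"),
]

def classify(sql_text):
    # First matching wildcard pattern wins, as in a WILDCARD match-type lookup.
    for pattern, app_name in wildcard_table:
        if fnmatchcase(sql_text, pattern):
            return app_name
    return None

print(classify("SELECT * FROM SCBT_INVOICES"))       # matches *SCBT*
print(classify("UPDATE DEPARTMENT SET name = 'x'"))  # false positive: "DEPARTMENT" contains ART
```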
Hello, I'm trying to dynamically set some extractions to save myself the time and effort of writing hundreds of extractions. In my org's IdAM solution, we have hundreds of various user claims, i.e.:

Data={"Claims":{"http://wso2.org/claims/user":"username","http://wso2.org/claims/role":"user_role",...etc}

I would like to set up a single extraction that will extract all of these claims. My idea was the following:

props.conf
EXTRACT-nrl_test = MatchAllClaims

transforms.conf
[MatchAllClaims]
FORMAT = user_$1::$2
REGEX = \"http:\/\/wso2.org\/claims\/(\w+)\":\"([^\"]+)
MV_ADD = true

I was hoping this would extract the fields dynamically, but it did not work. Is there a way to accomplish this with one extraction?

Thank you
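As a sanity check on the regex itself (separate from the props/transforms wiring), here is a quick Python test against a made-up sample event; it shows the pattern does capture the claim name/value pairs the way FORMAT = user_$1::$2 expects:

```python
import re

# Same pattern as the REGEX line in transforms.conf, in Python syntax
# (with the dot in wso2.org escaped).
claim_re = re.compile(r'"http://wso2\.org/claims/(\w+)":"([^"]+)')

# Made-up sample event in the format from the question.
sample = ('Data={"Claims":{"http://wso2.org/claims/user":"username",'
          '"http://wso2.org/claims/role":"user_role"}}')

# Mimics FORMAT = user_$1::$2 with MV_ADD-style accumulation.
fields = {}
for name, value in claim_re.findall(sample):
    fields.setdefault(f"user_{name}", []).append(value)

print(fields)
```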
My Splunk web service cannot recognize my sourcetype from the props.conf file when I try to add data. Here is my props.conf file's content:

[Test9]
TIME_PREFIX=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\s\-\s\d{5}\s+
TIME_FORMAT = %m/%d/%Y %k:%M
MAX_TIMESTAMP_LOOKAHEAD = 15
LINE_BREAKER = ([\r\n]+)\d+\s+\"\$EIT\,
SHOULD_LINEMERGE = false
TRUNCATE = 99999

My props.conf file path is: C:\Program Files\Splunk\etc\apps\test\local
Help us help you by providing more information.
How is the data being onboarded?  IOW, what is the method for getting the events to Splunk?
Are there any errors in the logs?
How have you determined the events are not indexed?
Dear Team, Please let me know how to set up Azure Private Link from a customer Azure Virtual Network (VNet) to Splunk Cloud (onsite, not in the Azure cloud). Thanks.
I am new to Splunk, so my question may be very basic. I have built a Splunk dashboard using the classic option. I have some statistics tables and a line chart in there. The drilldown works great if configured as "Link to search" and Auto, which opens in the same window, but I want it to open in a new window. When I try to configure it as Custom, I see the Custom drilldown configuration screen, but it doesn't open the relevant record/log which I am clicking.

Below is the decoded URL when I configure the drilldown as Auto (when it works):

https://splunk.wellsfargo.net/en-US/app/wf-s-eft/search?q=search index=**** wf_id=*** source="****" <other search condition> | search Dataset="DS1" | rename ToatalProcessTime AS "Processing Time", TotalRecordsSaved AS "Record Saved", WorkFlow AS Integration &earliest=1716004800.000&latest=1716091200&sid=1716232362.2348555_113378B4-9E44-4B5A-BDBA-831A6E059142&display.page.search.mode=fast&dispatch.sample_ratio=1

I have edited the URL for privacy; <other search condition> stands for the extended search condition.

Below are the search conditions injected by Splunk:
search Dataset="DS1" - where DS1 is the dataset which I clicked
earliest=1716004800.000&latest=1716091200 - these are the 2 values sent based on the click

How can I pass these values while configuring a Custom drilldown to open in a new window?

Thanks in advance!
Sid
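In a classic (Simple XML) dashboard, one way to do this is a <drilldown> element with a <link target="_blank"> that rebuilds the search URL from tokens. A sketch under assumptions: the app path and field name follow the question, everything else is hypothetical, and the predefined $earliest$, $latest$, and $row.Dataset|u$ (URL-encoded row value) tokens are assumed available for the clicked row:

```xml
<table>
  <search>
    <query>... your base search ...</query>
  </search>
  <drilldown>
    <!-- target="_blank" opens the drilldown search in a new window/tab -->
    <link target="_blank">
      /app/wf-s-eft/search?q=search%20index%3D****%20Dataset%3D%22$row.Dataset|u$%22&amp;earliest=$earliest$&amp;latest=$latest$
    </link>
  </drilldown>
</table>
```

Note the query string has to be URL-encoded by hand, and & must be written as &amp; inside the XML.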
I have a dbxquery command that queries an Oracle server that has a DATE value stored in GMT. My SQL converts it to a string so I can later use strptime to set the _time value for timecharting:

SELECT TO_CHAR(INTERVAL_START_TIME, 'YYYY-MM-DD-hh24-mi-ss') as Time FROM ...

Then at the end of my SPL:

... | eval _time=strptime(TIME,"%Y-%m-%d-%H-%M-%S")
| timechart span=1h sum(VALUE) by CATEGORY

On the chart that renders, we see values in GMT (which we want). My user timezone is Central Standard, however, and not GMT. When I click (drill down on) a value, $click.value$ passes the epoch time converted to CST. As an example, if I click the bar that is for 2 PM today, my click-action parameter is 1715972400.000, which is Friday, May 17, 2024 7:00:00 PM GMT: 5 hours ahead. I validated this by changing my user timezone to GMT, and it then passes the epoch time in GMT. I googled 'splunk timezone' and haven't found anything yet that addresses this specifically (I did find this related thread, but no solution: https://community.splunk.com/t5/Dashboards-Visualizations/Drill-down-changes-timezones/m-p/95599), so I wanted to ask here!

It's an issue because the drilldown also relies on dbxquery data, so my current plan is to deal with the incorrect time on the drilldown (in SQL), but I can only support that if all users are in the same timezone. In conclusion, what would be nice is if I could tell Splunk to 'not change the epoch time' when clicked. I think!
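One thing worth separating out: an epoch value itself is timezone-agnostic; only its rendering depends on a timezone. A small Python illustration using the epoch from the example above (the fixed UTC-5 offset stands in for the user timezone; real Central time shifts with DST):

```python
from datetime import datetime, timezone, timedelta

click_epoch = 1715972400.0  # the drilldown value from the example

# The same instant rendered in two zones; the epoch itself never changes.
as_utc = datetime.fromtimestamp(click_epoch, tz=timezone.utc)
as_cdt = datetime.fromtimestamp(click_epoch, tz=timezone(timedelta(hours=-5)))

print(as_utc.isoformat())  # 2024-05-17T19:00:00+00:00
print(as_cdt.isoformat())  # 2024-05-17T14:00:00-05:00
```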
Hi folks,

This has been bugging me for a while. When I click on a custom-made correlation search in the Security Posture dashboard's Top Notable Events pane, it doesn't filter for that rule name in Incident Review; it just shows all of them. Where do I configure it to drill down properly?

Thanks!
I am trying to make email templates for the "send email" alert action. So far I have edited alert_actions.conf and put that in a new app I created, but what it is doing is just overriding the "send email" alert action, and that's not what I want. What I want is to have multiple send-email actions. Is there a way to not override the base "send email" action? What I fear is that I will have to create a copy of sendemail.py, make a small edit, put it in my app's bin folder, and rename it to something like sendSREemail.py.

alert_actions.conf:

[email]
label = SRE Email Template
icon_path = mod_alert_icon_email.png
from = xxxxx@xxxx.com
mailserver = xxxxxx.com
pdf.header_left = none
pdf.header_right = none
use_tls = 1
hostname = xxxxxx.com
message.alert = Alert: $name$\
Why am I receiving this alert? (Give a brief description of the alert and why this alert is triggering)\
\
How do I fix it?\
1. Step 1\
2. Step 2\
3. Step 3

Thanks again, Splunk community.
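For what it's worth, a stanza named [email] always overrides the built-in action, because the stanza name is the action's identity. A separate action generally means a separate stanza, which for anything beyond the built-ins falls under the custom alert action framework with its own script. A hedged sketch only; the stanza name, script name, and params here are all hypothetical:

```ini
# alert_actions.conf (sketch, not a verified working config)
[sre_email]
is_custom = 1
label = SRE Email Template
icon_path = mod_alert_icon_email.png
payload_format = json
param.message = Alert: $name$
```

This would be paired with a matching bin/sre_email.py (e.g., a trimmed copy of sendemail.py) that actually sends the mail.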
Hello - Curious if you ever found a solution for writing Splunk results to Snowflake?  Did you end up using DB Connect? Thx!
I am trying to deploy Splunk 9.2.1 in an air-gapped environment.

As I go through the STIG list to harden the system, one of the items asks me to turn FIPS and Common Criteria mode on. Turning FIPS mode on is easy, but Common Criteria seems to have some other requirements. I am trying to read up on Common Criteria for Splunk but am not 100% clear about it, and I am also not sure if I need it in an air-gapped environment.

Has someone here gone through enabling it? Can you please provide more info on it? Especially if it is not needed, I can present that to my ISSO. Thanks in advance.
Hello Everyone,

Recently I have been trying to ingest logs from my server, but they are not getting indexed. The log file which I am trying to ingest has different timestamps with the same events.

Events in the log file:

1712744099:{"jsonefd":"1.0","result":"1357","id":1}
1712744400:{"jsonefd":"1.0","result":"1357","id":1}
1712745680:{"jsonefd":"1.0","result":"1357","id":1}
1714518017:{"jsonefd":"1.0","result":"1378","id":1}
1715299221:{"jsonefd":"1.0","result":"1366","id":1}

I tried with crcSalt but still no luck. Kindly help if anyone has faced this issue before. I would like to ingest the events even when the events are the same with different timestamps.
I am having the same issue installing this version. A clean install or a reinstall gives the same problem, and to this point I have tried multiple fixes. The odd thing here is that it installed successfully on 4 servers, and only on one will the service restart.
Those vmware-vclogs are creating lots of small buckets (folders). This happens when the data onboarding is incorrect (timestamps or formatting). I would look at those logs and ensure you have applied proper data hygiene with the correct TA: https://docs.splunk.com/Documentation/VMW/4.0.4/Installation/CollectVMwarevCenterServerLinuxAppliancelogdata
So this initially looks like the sender does not have certs. What is 192.168.100.1? (The client sending should now have the TLS certs.) What does the outputs configuration on the client (UF) look like?

Test from the client:

openssl s_client -connect <hostname>:9997

Or:

/opt/splunkforwarder/bin/splunk cmd openssl s_client -connect <hostname>:9997