All Posts


Hello, I'm trying to dynamically set some extractions to save myself the time and effort of writing hundreds of them. In my org's IdAM solution, we have hundreds of different user claims, e.g.:

Data={"Claims":{"http://wso2.org/claims/user":"username","http://wso2.org/claims/role":"user_role",...}}

I would like to set up a single extraction that extracts all of these claims. My idea was the following:

props.conf:
EXTRACT-nrl_test = MatchAllClaims

transforms.conf:
[MatchAllClaims]
FORMAT = user_$1::$2
REGEX = \"http:\/\/wso2\.org\/claims\/(\w+)\":\"([^\"]+)
MV_ADD = true

I was hoping this would extract the fields dynamically, but it did not work. Is there a way to accomplish this with one extraction?

Thank you
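A note on the config above: in props.conf, EXTRACT- expects an inline regex, while transform-based search-time extractions (the kind that support FORMAT = $1::$2 for dynamic field names) are referenced with REPORT-. A minimal sketch, assuming the stanza name matches your sourcetype (the name your_sourcetype is a placeholder):

```ini
# props.conf — REPORT- (not EXTRACT-) references a transforms.conf stanza
[your_sourcetype]
REPORT-nrl_test = MatchAllClaims

# transforms.conf — $1 becomes the field name, $2 its value; MV_ADD keeps
# every claim matched in the event rather than only the first
[MatchAllClaims]
REGEX = "http://wso2\.org/claims/(\w+)":"([^"]+)"
FORMAT = user_$1::$2
MV_ADD = true
```

Note that in transforms.conf the forward slashes and quotes do not need backslash-escaping.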
My Splunk web UI cannot recognize my source type from props.conf when I try to add data. Here is the content of my props.conf file:

[Test9]
TIME_PREFIX=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\s\-\s\d{5}\s+
TIME_FORMAT = %m/%d/%Y %k:%M
MAX_TIMESTAMP_LOOKAHEAD = 15
LINE_BREAKER = ([\r\n]+)\d+\s+\"\$EIT\,
SHOULD_LINEMERGE = false
TRUNCATE = 99999

My props.conf file path is: C:\Program Files\Splunk\etc\apps\test\local
Help us help you by providing more information. How is the data being onboarded?  IOW, what is the method for getting the events to Splunk? Are there any errors in the logs? How have you determined the events are not indexed?
Dear Team, please let me know how to set up Azure Private Link from a customer Azure Virtual Network (VNet) to Splunk Cloud (onsite, not in the Azure cloud). Thanks.
I am new to Splunk, so my question may be very basic. I have built a Splunk dashboard using the Classic option, with some statistics tables and a line chart. The drilldown works great when configured as "Link to search" with Auto, which opens in the same window, but I want it to open in a new window. When I try to configure it as Custom, it doesn't open the relevant record/log that I am clicking.

Below is the decoded URL when I configure the drilldown as Auto (when it works):

https://splunk.wellsfargo.net/en-US/app/wf-s-eft/search?q=search index=**** wf_id=*** source="****" <other search condition> | search Dataset="DS1" | rename ToatalProcessTime AS "Processing Time", TotalRecordsSaved AS "Record Saved", WorkFlow AS Integration &earliest=1716004800.000&latest=1716091200&sid=1716232362.2348555_113378B4-9E44-4B5A-BDBA-831A6E059142&display.page.search.mode=fast&dispatch.sample_ratio=1

I have edited the URL for privacy; <other search condition> stands for an extended search condition. The search conditions injected by Splunk are:

search Dataset="DS1" - where DS1 is the dataset I clicked
earliest=1716004800.000&latest=1716091200 - the two values sent based on the click

How can I pass these values while configuring a Custom drilldown so it opens in a new window?

Thanks in advance!
Sid
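In Classic (Simple XML) dashboards, one way to get a new window is to define the panel's drilldown as a link with target="_blank" and pass the click tokens yourself. A sketch, with the index and the Dataset field taken from the URL above as placeholders:

```xml
<drilldown>
  <link target="_blank">search?q=search index=my_index | search Dataset="$row.Dataset|u$"&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>
```

Here $row.Dataset|u$ is the clicked row's Dataset value (URL-encoded by the |u filter), $earliest$ and $latest$ carry the time range of the click, and &amp; is required because the link lives inside XML.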
I have a dbxquery command that queries an Oracle server with a DATE value stored in GMT. My SQL converts it to a string so I can later use strptime to set the _time value for timecharting:

SELECT TO_CHAR(INTERVAL_START_TIME, 'YYYY-MM-DD-hh24-mi-ss') as Time FROM ...

Then at the end of my SPL:

... | eval _time=strptime(TIME,"%Y-%m-%d-%H-%M-%S") | timechart span=1h sum(VALUE) by CATEGORY

On the chart that renders, we see values in GMT (which we want). My user timezone, however, is Central Standard, not GMT. When I click (drill down on) a value, $click.value$ passes the epoch time converted to CST. For example, if I click the bar for 2 PM today, my click-action parameter is 1715972400.000, which is Friday, May 17, 2024 7:00:00 PM GMT - 5 hours ahead. I validated this by changing my user timezone to GMT, after which it passes the epoch time in GMT. I googled 'splunk timezone' and haven't found anything yet that addresses this specifically (I did find this related thread, but no solution: https://community.splunk.com/t5/Dashboards-Visualizations/Drill-down-changes-timezones/m-p/95599), so I wanted to ask here. It's an issue because the drilldown also relies on dbxquery data, so my current plan is to correct the time in the drilldown's SQL, but I can only support that if all users are in the same timezone. In short, what would be nice is if I could tell Splunk not to change the epoch time when clicked.
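One possible workaround (a sketch, untested against your data): strptime honors an explicit UTC offset via %z, so appending a fixed "+0000" to the string forces it to be parsed as GMT regardless of the user's timezone preference:

```
... | eval _time=strptime(TIME."+0000","%Y-%m-%d-%H-%M-%S%z") | timechart span=1h sum(VALUE) by CATEGORY
```

With this, _time carries the true GMT epoch, so the drilldown token is no longer shifted; the trade-off is that the chart axis will then render in each user's configured timezone rather than in GMT.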
Hi folks,   This has been bugging me for a while. When I click on a custom-made correlation search in the Security Posture's Top Notable Events dashboard pane, it doesn't filter for that rule name in the incident review, it just shows all of them. Where do I configure it to drill down properly?   Thanks!  
I am trying to make email templates for the "send email" alert action. So far I have edited alert_actions.conf and put it in a new app I created, but all it does is override the base "send email" alert action, and that's not what I want. What I want is to have multiple send email actions. Is there a way to not override the base "send email" action? My fear is that I will have to copy sendemail.py, make a small edit, put it in my app's bin folder, and rename it to something like sendSREemail.py.

alert_actions.conf:

[email]
label = SRE Email Template
icon_path = mod_alert_icon_email.png
from = xxxxx@xxxx.com
mailserver = xxxxxx.com
pdf.header_left = none
pdf.header_right = none
use_tls = 1
hostname = xxxxxx.com
message.alert = Alert: $name$\
Why am I receiving this alert? (Give a brief description of the alert and why this alert is triggering)\
\
How do I fix it?\
1. Step 1\
2. Step 2\
3. Step 3

Thanks again, Splunk community.
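One approach that avoids overriding the built-in [email] stanza is to define a separate custom alert action under its own stanza name; the stanza name maps to a script of the same name in the app's bin directory. A sketch (the stanza and script names are hypothetical):

```ini
# alert_actions.conf in your app — a new stanza, so the stock
# "send email" action stays untouched
[sre_email]
is_custom = 1
label = SRE Email Template
icon_path = mod_alert_icon_email.png
payload_format = json
```

An sre_email.py in bin/ would then carry your template logic - close to the copy-and-rename approach you describe, but scoped to a new action shown alongside "send email" rather than replacing it.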
Hello - curious if you ever found a solution for writing Splunk results to Snowflake? Did you end up using DB Connect? Thx!
I am trying to deploy Splunk 9.2.1 in an air-gapped environment.

As I go through the STIG list to harden the system, one of the items asks me to turn on FIPS and Common Criteria mode. Turning FIPS mode on is easy, but Common Criteria seems to have some other requirements. I am trying to read up on Common Criteria for Splunk but am not 100% clear on it, and I'm also not sure if I need it in an air-gapped environment.

Has someone here gone through enabling it? Can you please provide more info on it? In particular, if it is not needed, I can present that to my ISSO. Thanks in advance.
Hello everyone,

Recently I have been trying to ingest logs from my server, but they are not getting indexed. The log file I am trying to ingest has the same events with different timestamps:

1712744099:{"jsonefd":"1.0","result":"1357","id":1}
1712744400:{"jsonefd":"1.0","result":"1357","id":1}
1712745680:{"jsonefd":"1.0","result":"1357","id":1}
1714518017:{"jsonefd":"1.0","result":"1378","id":1}
1715299221:{"jsonefd":"1.0","result":"1366","id":1}

I tried crcSalt but still no luck. I would like to ingest the events even when they are identical apart from the timestamp. Kindly help if anyone has faced this issue before.
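Splunk does not dedup identical events within one file, so events that never appear are more often a timestamp-parsing problem (misparsed dates land events outside the time range being searched). For epoch-prefixed lines like these, a props.conf along these lines makes the timestamp explicit - a sketch, with the sourcetype name as a placeholder:

```ini
# props.conf — parse the leading epoch seconds with %s; one event per line
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11
```

With this in place, try searching "All time" to confirm whether the events were indexed under unexpected timestamps.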
I am having the same issue installing this version. A clean install or a reinstall gives the same problem. To this point I have tried multiple fixes. The odd thing here is that it installed successfully on 4 servers, and on only one will the service restart.
Those vmware-vclogs are creating lots of small buckets (folders). This happens when the data onboarding is incorrect - timestamps or formatting. I would look at those logs and ensure you have applied proper data hygiene with the correct TA: https://docs.splunk.com/Documentation/VMW/4.0.4/Installation/CollectVMwarevCenterServerLinuxAppliancelogdata
So this initially looks like the sender does not have certs. What is 192.168.100.1? (The client sending should now have the TLS certs.) What does the outputs config on the client (UF) look like?

Test from the client:

openssl s_client -connect <hostname>:9997

Or:

/opt/splunkforwarder/bin/splunk cmd openssl s_client -connect <hostname>:9997
Hey all, I recently upgraded our Splunk server to 9.1.3. I have a single UF running 8.2 which connects; however, my newly deployed 9.1.3 forwarder on server 2 (Windows Server) doesn't connect. This is net new and has never connected. I am seeing mixed info on whether or not SSL certs need to be configured on the forwarder. I see the UF talking to our Enterprise server on port 9997. I am using CA-signed certs on the Splunk server and default certificates on the server which runs the UF. Can anyone point me in the right direction to get this working? The outputs.conf is as follows:

[tcpout]
defaultGroup=default-autolb-group

[tcpout:default-autolb-group]
server=<SPLUNK_IP_SERVER>:9997
useSSL=false

[tcpout-server://<SPLUNK_IP_SERVER>:9997]
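If the indexer's 9997 input is TLS-enabled (you mention CA-signed certs on the Splunk server), a forwarder with useSSL=false will speak plain TCP to a TLS port and never complete the connection. A sketch of the outputs.conf change on the UF - the CA path is a placeholder for wherever the chain that signed the indexer's certificate lives on the forwarder:

```ini
# outputs.conf on the UF
[tcpout:default-autolb-group]
server = <SPLUNK_IP_SERVER>:9997
useSSL = true
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\ca.pem
# relax server-cert verification while testing; tighten once working
sslVerifyServerCert = false
```

Comparing this against the working 8.2 forwarder's outputs.conf should show quickly whether SSL settings are the difference.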
Hi, sorry for any confusion in my comments; I am not asking that the app should be archived. We have had this app installed on our SH for a long time, and all of a sudden the app stopped working. After we raised a case with Splunk, they mentioned the app got deprecated. Now I am checking if there is any alternate option to onboard the CAS (Cloud App Security) logs. Per your comments, if the app is still active, then why is the console not opening?
Hi @tej57, thank you for sharing the code for country and site. But here I have 8 hosts: 4 belong to India and the other 4 to China. I tried using the code below for the hosts dropdown in the dashboard, and the dropdown itself displays correctly; but when I open the search, the host shown under selected fields is not one of the hosts in the dropdown list - it shows a different host that is not in the dropdown. We want the dashboard to show data only for these 8 hosts.

<input type="dropdown" token="host">
  <label>Hosts</label>
  <choice value="*">All</choice>
  <prefix>host="</prefix>
  <suffix>"</suffix>
  <default>*</default>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>
      | makeresults | eval site="BDC", host="jboss.cloud.com" | fields site host
      | append [ | makeresults | eval site="BDC", host="ulkoy.cloud.com" | fields site host]
      | append [ | makeresults | eval site="BDC", host="ualki.cloud.com" | fields site host]
      | append [ | makeresults | eval site="BDC", host="hyjki.cloud.com" | fields site host]
      | append [ | makeresults | eval site="SOC", host="uiy67.cloud.com" | fields site host]
      | append [ | makeresults | eval site="SOC", host="7hy56.cloud.com" | fields site host]
      | append [ | makeresults | eval site="SOC", host="ju5e.cloud.com" | fields site host]
      | append [ | makeresults | eval site="SOC", host="mjut.cloud.com" | fields site host]
      | search $site$
      | dedup host | sort host | table host
    </query>
  </search>
</input>
Try replacing the last stats command with timechart. | timechart count by protocol  
Try | chart count by _time protocol