All Topics


I am not seeing an option to make my dashboard public or shared. Please guide me.
Hi, how can I forward events from Splunk Enterprise to SOAR automatically?
Hi All, when we run a Splunk search in our application (sh_app1), we notice that some fields are duplicated / doubled up (refer: sample_logs.png). If we run the same search in another application (sh_welcome_app_ui), we do not see any duplication for the same fields.

cid Perf-May06-9-151xxx
level INFO
node_name aks-application-xxx

SPL being used:

index=splunk_idx source=some_source
| rex field=log "level=(?<level>.*?),"
| rex field=log "\[CID:(?<cid>.*?)\]"
| rex field=log "message=(?<msg>.*?),"
| rex field=log "elapsed_time_ms=\"(?<elap>.*?)\""
| search msg="\"search pattern\""
| table cid, msg, elap

The event count remains the same whether we search inside that app or any other app; only some fields are duplicated. We couldn't figure out where the actual issue is. Can someone help?
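A quick way to confirm whether this is a real multivalue extraction (for example, an app-scoped search-time extraction producing the field in addition to the inline rex) rather than a display artifact is to count values per event. A minimal sketch, reusing the search from this post:

index=splunk_idx source=some_source
| rex field=log "\[CID:(?<cid>.*?)\]"
| where mvcount(cid) > 1
| table _raw cid

If this returns rows only when run from sh_app1, the field really is multivalued in that app's search context, which usually points to a props.conf/transforms.conf extraction that is visible only to sh_app1.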
Risky Signin Analytic Dashboard

When & How to use this dashboard
Monitoring: select the risk value and leave the user as * for global monitoring.
Threat Hunting: expand the time range to find highly suspicious activity accounts, then drill down.
User sign-in activity tracking: drill down into a user's sign-in activities by entering a specific account and selecting the corresponding risk value from "Multiselect-Risk Value".

Panel Descriptions
The panel above uses the fixed "high" risky sign-in condition to present the daily count of highly suspicious sign-in activity accounts. The increase/decrease percentage is compared with yesterday's count.
The panel above uses the fixed "Time Range = Last 30 days" condition to present a one-month trend of the daily highly suspicious sign-in activity accounts.
The panel above uses the Global Time Range and Multiselect-Risk Value conditions to present the top 10 AAD failure/success accounts.
The panel above provides a drilldown of aggregated events when you select a top 10 failure/success account you are interested in.
The panel above uses the Global Time Range and Multiselect-Risk Value conditions to present every account that has both failure and success sign-in events in the correlation results. An account with a higher count may indicate a higher possibility of being under attack. This panel does not apply a top-N limitation, so you might observe that the total number of accounts is larger than in the "Top 10" panel.
The Azure AD Audit Event panel provides a drilldown of aggregated AAD audit events when you click a "Suspicious Signin Activity Account".
The User Identity Information panel provides the user identity information when you click a "Suspicious Signin Activity Account" and presents the lookup results in a table.
The panel above provides a drilldown aggregated event view of the selected account's sign-in activities, sorted by the time column so an analyst can spot impossible-travel records more clearly. This sample shows an account with a historical series of high-probability attacks.
The panel above presents the selected account's risky sign-in source IPs on a geographic map. Drilldown: click a SourceIP value to propagate the IP address to the CTI enrichment lookup.
The three panels above provide a view of the selected account's normal sign-in properties so a cybersecurity analyst can compare them with the abnormal drilldown view more clearly.
The panel above uses the fixed "Time Range = Last 30 days" condition to present all failure/success sign-in events on a geographic map so an analyst can compare them with the abnormal source locations more clearly.
The panels above show how many signatures were detected for the selected user, presented as a pie chart, and provide a drilldown aggregated event view of those alert events.

Notes
If the "Suspicious Signin Activity Account" panel presents nothing, the user account might still be under attacks such as password spray or brute force that were not successful, because no event matched the strict conditions (events must include both failure and success and fall within the specified risk values).
Not all accounts presented in the "Suspicious Signin Activity Account" panel are true positives, but a higher count can be an indicator and warrants a look from a cybersecurity analyst.
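For reference, a minimal sketch of the kind of correlation the "Suspicious Signin Activity Account" panel describes; the index is omitted and the field names (identity, properties.status.errorCode) are assumptions based on the common azure:monitor:aad sign-in schema, not the dashboard's actual SPL:

sourcetype="azure:monitor:aad" category="SignInLogs"
| eval outcome=if('properties.status.errorCode'==0, "success", "failure")
| stats count(eval(outcome="failure")) as failures, count(eval(outcome="success")) as successes by identity
| where failures > 0 AND successes > 0
| sort - failures

An account showing both failures and successes within the selected risk values, with a high failure count, matches the "possibly under attack" heuristic described above.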
Has anyone successfully used the Splunk API call /services/saved/searches/SEARCH_NAME (https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#saved.2Fsearches.2F.7Bname.7D) to add a webhook to an existing Splunk report? I added action.webhook=1, action.webhook.param.url=https://1234.com, and actions=pagerduty,webhook successfully through the API, but the Splunk UI does not show the webhook (please see screenshot). Does anyone have an idea what the problem might be?

curl \
--data-urlencode 'action.webhook.param.url=https://1234.com' \
--data-urlencode 'action.webhook=1' \
--data-urlencode 'actions=pagerduty,webhook' \
--data-urlencode 'output_mode=json' \
--header "Authorization: Splunk A_TOKEN_HERE" \
--insecure \
--request 'POST' \
--retry '12' \
--retry-delay '5' \
--silent \
"https://localhost:8089/services/saved/searches/test-12345"
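One way to narrow this down is to read the saved search back over REST and confirm what splunkd actually stored; if the settings are there, the problem is on the UI side (for example, viewing the report from a different app context) rather than with the POST. A minimal verification sketch in the same style as the request above:

curl \
--header "Authorization: Splunk A_TOKEN_HERE" \
--insecure \
--silent \
"https://localhost:8089/services/saved/searches/test-12345?output_mode=json" \
| grep -oE '"action\.webhook[^,]*|"actions":"[^"]*"'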
This is my dashboard, built using Dashboard Studio. The information used for this dashboard can be found on the Kaggle website along with CDC.org. The data was collected in 2022, so it is a few years behind. I used a wide variety of colors and different labels to draw any reader's attention. #DashboardChallenge #SplunkCommunity #DashboardStudio
After installation of Alert Manager Enterprise 3.0.6 in Splunk Cloud, the Start screen never appears and the error "JSON replay had no payload value" is shown 10 times. Q: Has anyone run into this error?
Hello, I'm trying to dynamically set some extractions to save myself the time and effort of writing hundreds of extractions. In my org's IdAM solution, we have hundreds of various user claims, e.g.:

Data={"Claims":{"http://wso2.org/claims/user":"username","http://wso2.org/claims/role":"user_role",...etc}

I would like to set up a single extraction that will extract all of these claims. My idea was the following:

props.conf
EXTRACT-nrl_test = MatchAllClaims

transforms.conf
[MatchAllClaims]
FORMAT = user_$1::$2
REGEX = \"http:\/\/wso2\.org\/claims\/(\w+)\":\"([^\"]+)
MV_ADD = true

I was hoping this would extract the fields dynamically, but it did not work. Is there a way to accomplish this with one extraction? Thank you
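For context on why the attempt above does not fire: in props.conf, EXTRACT- expects an inline regex with named capture groups, while an extraction that references a transforms.conf stanza (which is what FORMAT = user_$1::$2 requires) is wired up with REPORT-. A minimal sketch, assuming a hypothetical sourcetype name of wso2_idam:

props.conf
[wso2_idam]
REPORT-all_claims = MatchAllClaims

transforms.conf
[MatchAllClaims]
REGEX = "http://wso2\.org/claims/(\w+)":"([^"]+)
FORMAT = user_$1::$2
MV_ADD = true

The dynamic $1::$2 FORMAT names each extracted field after the claim (user_user, user_role, and so on), and MV_ADD keeps repeated claim names as multivalue fields.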
My Splunk web service cannot recognize my sourcetype from the props.conf file when I try to add data. Here is my props.conf file's content:

[Test9]
TIME_PREFIX=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\s\-\s\d{5}\s+
TIME_FORMAT = %m/%d/%Y %k:%M
MAX_TIMESTAMP_LOOKAHEAD = 15
LINE_BREAKER = ([\r\n]+)\d+\s+\"\$EIT\,
SHOULD_LINEMERGE = false
TRUNCATE = 99999

My props.conf file path is: C:\Program Files\Splunk\etc\apps\test\local
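Before anything else, it is worth checking whether splunkd is actually reading the stanza from that app, since the Add Data workflow only offers sourcetypes splunkd knows about (and new props.conf stanzas generally need a restart to be picked up). From the Splunk bin directory:

cd "C:\Program Files\Splunk\bin"
splunk btool props list Test9 --debug

If [Test9] does not come back with your app's local props.conf listed as the source file, the file is not being read (wrong path, filename, or permissions); if it does come back, a restart and a check of the app's sharing permissions are the next things to try.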
Dear Team, please let me know how to set up Azure Private Link from a customer Azure Virtual Network (VNet) to Splunk Cloud (onsite, not in the Azure cloud). Thanks.
I am new to Splunk, so my question may be very basic. I have built a Splunk dashboard using the classic option. I have some statistics tables and a line chart in there. The drilldown works great if configured as "Link to search" with Auto, which opens in the same window, but I want it to open in a new window. When I try to configure it as Custom, I see the following screen, but it doesn't open the relevant record/log that I am clicking.

Below is the decoded URL when I configure the drilldown as Auto (when it works):

https://splunk.wellsfargo.net/en-US/app/wf-s-eft/search?q=search index=**** wf_id=*** source="****" <other search condition> | search Dataset="DS1" | rename ToatalProcessTime AS "Processing Time", TotalRecordsSaved AS "Record Saved", WorkFlow AS Integration &earliest=1716004800.000&latest=1716091200&sid=1716232362.2348555_113378B4-9E44-4B5A-BDBA-831A6E059142&display.page.search.mode=fast&dispatch.sample_ratio=1

I have edited the URL for privacy; <other search condition> stands for the extended search condition. Below are the search conditions injected by Splunk:
search Dataset="DS1" - where DS1 is the dataset which I clicked
earliest=1716004800.000&latest=1716091200 - these are the 2 values sent based on the click

How can I pass these values while configuring a Custom drilldown so it opens in a new window? Thanks in advance! Sid
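In classic (Simple XML) dashboards, one way to keep the "Link to search" behavior but open in a new tab is a <link> drilldown with target="_blank" and the clicked values passed as tokens. A minimal sketch, assuming the clicked table column carries the Dataset value; the search string is abbreviated and the index/source placeholders are hypothetical:

<drilldown>
  <link target="_blank">search?q=search index=your_index source="your_source" Dataset="$row.Dataset$"&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>

$row.Dataset$ carries the clicked row's Dataset value, and $earliest$/$latest$ carry the time range of the dispatched search, which mirrors the values you see Splunk injecting into the Auto drilldown URL.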
I have a dbxquery command that queries an Oracle server with a DATE value stored in GMT. My SQL converts it to a string so I can later use strptime to set the _time value for timecharting:

SELECT TO_CHAR(INTERVAL_START_TIME, 'YYYY-MM-DD-hh24-mi-ss') as Time FROM ...

Then at the end of my SPL:

... | eval _time=strptime(TIME,"%Y-%m-%d-%H-%M-%S") | timechart span=1h sum(VALUE) by CATEGORY

On the chart that renders, we see values in GMT (which we want). My user timezone is Central Standard, however, and not GMT. When I click (drilldown) a value, $click.value$ passes the epoch time converted to CST. As an example, if I click the bar that is for 2 PM today, my click-action parameter is 1715972400.000, which is Friday, May 17, 2024 7:00:00 PM GMT, 5 hours ahead. I validated this by changing my user timezone to GMT, and then it passes the epoch time in GMT. I googled 'splunk timezone' and haven't found anything yet that addresses this specifically (I did find this related thread, but no solution: https://community.splunk.com/t5/Dashboards-Visualizations/Drill-down-changes-timezones/m-p/95599), so I wanted to ask here. It's an issue because the drilldown also relies on dbxquery data, so my current plan is to deal with the incorrect time in the drilldown (in SQL), but I can only support that if all users are in the same timezone. In conclusion, what would be nice is if I could tell Splunk to not change the epoch time when clicked. I think!
Hi folks, this has been bugging me for a while. When I click on a custom-made correlation search in the Security Posture dashboard's Top Notable Events panel, it doesn't filter for that rule name in Incident Review; it just shows all of them. Where do I configure it to drill down properly? Thanks!
I am trying to make email templates for the "send email" alert action. So far I have edited alert_actions.conf and put it in a new app I created, but what that does is just override the "send email" alert action, and that's not what I want. What I want is to have multiple send email actions. Is there a way to not override the base "send email" action? What I fear is that I will have to create a copy of sendemail.py, make a small edit, put that in my app's bin folder, and rename it to something like sendSREemail.py.

alert_actions.conf:
[email]
label = SRE Email Template
icon_path = mod_alert_icon_email.png
from = xxxxx@xxxx.com
mailserver = xxxxxx.com
pdf.header_left = none
pdf.header_right = none
use_tls = 1
hostname = xxxxxx.com
message.alert = Alert: $name$\
Why am I receiving this alert? (Give a brief description of the alert and why this alert is triggering)\
\
How do I fix it?\
1. Step 1\
2. Step 2\
3. Step 3

Thanks again, Splunk community.
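For what it's worth, stanza names in alert_actions.conf are what Splunk keys on, so any [email] stanza in any app merges with and overrides the built-in send email action; re-declaring [email] can never produce a second entry. A separate action needs its own stanza and its own script under the custom alert action framework. A very rough sketch of the shape of that, with hypothetical names (sre_email, bin/sre_email.py) and only skeleton settings:

default/alert_actions.conf
[sre_email]
is_custom = 1
label = SRE Email Template
description = Send email using the SRE template
icon_path = mod_alert_icon_email.png
payload_format = json

A custom action also needs its script in bin/ (named after the stanza) and usually a small HTML form under default/data/ui/alerts/ so its parameters show up in the alert dialog; the custom alert action developer docs cover the full layout.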
I am trying to deploy Splunk 9.2.1 in an air-gapped environment. As I go through the STIG list to harden the system, one of the items asks me to turn FIPS and Common Criteria mode on. Turning FIPS mode on is easy, but Common Criteria seems to have some other requirements. I am trying to read up on Common Criteria for Splunk, but I am not 100% clear about it, and I am also not sure whether I need it in an air-gapped environment. Has someone here gone through enabling it? Can you please provide more info on it? Especially if it is not needed, I can present that to my ISSO. Thanks in advance.
Hello Everyone, recently I have been trying to ingest logs from my server, but they are not getting indexed. The log file I am trying to ingest has the same events with different timestamps.

Events in the log file:
1712744099:{"jsonefd":"1.0","result":"1357","id":1}
1712744400:{"jsonefd":"1.0","result":"1357","id":1}
1712745680:{"jsonefd":"1.0","result":"1357","id":1}
1714518017:{"jsonefd":"1.0","result":"1378","id":1}
1715299221:{"jsonefd":"1.0","result":"1366","id":1}

I tried with crcSalt but still no luck. Kindly help if anyone has faced this issue before. I would like to ingest the events even when the events are the same with different timestamps.
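Since each line starts with an epoch timestamp followed by a colon, a props.conf sketch along these lines should let Splunk read the time from the prefix and keep every line as its own event (the sourcetype name is hypothetical; identical events with different timestamps are not deduplicated by the indexer, so once the file is read each line should be indexed):

[my_json_feed]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11

If the file itself is never re-read after being appended to, that is a monitor-input question (initCrcLength or crcSalt in inputs.conf) rather than a timestamp one.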
Hey all, I recently upgraded our Splunk server to 9.1.3. I have a single UF running 8.2 which connects; however, my newly deployed 9.1.3 forwarder on server 2 (Windows Server) doesn't connect. This is net new and has never connected. I am seeing mixed info on whether or not SSL certs need to be configured on the forwarder. I see the UF talking to our Enterprise server on port 9997. I am using CA-signed certs on the Splunk server and default certificates on the server which runs the UF. Can anyone point me in the right direction to get this working? The outputs.conf is as follows:

[tcpout]
defaultGroup=default-autolb-group

[tcpout:default-autolb-group]
server=<SPLUNK_IP_SERVER>:9997
useSSL=false

[tcpout-server://<SPLUNK_IP_SERVER>:9997]
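Two quick checks on the new forwarder usually narrow this down; with useSSL=false, no certificates are needed on the UF, but the receiving side must then be a plain [splunktcp://9997] input rather than splunktcp-ssl. From the UF's bin directory on the Windows host (default install path assumed):

cd "C:\Program Files\SplunkUniversalForwarder\bin"
.\splunk list forward-server
findstr /i "TcpOutputProc" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"

The first command shows whether the indexer appears under active or inactive forwards, and the splunkd.log search surfaces the connection or handshake errors the UF is hitting.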
Trying to get 2 different lines, one for HDX and the other for RDP. Can anyone help, please?
Hi Team, I already have an active ServiceNow ticket and email notification integration set up for Splunk alerts. I am trying to add tokens that show the query results in the ServiceNow ticket description, the same way we get them in the email notification when we check the Inline Table fields option. Can you help me add the same to the ServiceNow ticket, so that I can get the query results in the ticket as well? Right now it shows me only the title of the alert, so every time an alert triggers I need to go to Splunk and run the alert's search to validate the alert manually.
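The ServiceNow alert action's text fields generally accept the same tokens as the email action, so referencing result fields directly in the description may get you part of the way. A sketch with hypothetical field names (host, count):

Description: $name$ triggered. Host: $result.host$, Count: $result.count$. Full results: $results_link$

Note that $result.<field>$ only expands values from the first result row; there is no token that renders a whole inline table, so for the full result set the usual options are linking back with $results_link$ or shaping the alert search so the row that matters is the first one.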
Hi everyone, I need to check my logs to see whether a user has MFA enabled or not. I've already configured the Microsoft Azure App for Splunk, and all the other data is coming through. Additionally, I can see 'azure:monitor:aad' logs. Can someone help me understand what changes need to be made on the Azure side to be able to view these logs? Thank you in advance.
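On the Azure side, the sign-in detail that indicates MFA comes through once the Microsoft Entra ID (Azure AD) diagnostic settings export the SignInLogs category to the destination your Azure input reads from. Once those events arrive under azure:monitor:aad, a sketch of the kind of search that surfaces MFA usage per user (field names are assumptions based on the standard sign-in log schema):

sourcetype="azure:monitor:aad" category="SignInLogs"
| stats latest(properties.authenticationRequirement) as authRequirement by properties.userPrincipalName

authenticationRequirement typically reports singleFactorAuthentication or multiFactorAuthentication, which is usually the quickest log-side indicator of whether MFA was applied to a sign-in.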