All Topics


In the process of Splunk integration with LastPass, we are getting an error like "Your SIEM refused to connect". Please ...
Hi, I have a use case where I need to find direct links between different events of the same index and sourcetype. The result should show me three different bars:

bar 1: count of the existing links (incl. filter criteria matching)
bar 2: count of the existing links where the filter criteria don't match
bar 3: count of the events where there is no existing link at all

I got as far as using a "left join" so as not to lose the non-matching events, but now I don't know how to separate them in a bar diagram, or how to count them with an if condition. It needs to be counted weekly. Can you help me please? This is my current query:

index=A
| rename Name as TargetName
| join type=left max=0 TargetName
    [ search index=A
    | fields TargetName ID Status ]
| join type=left SourceID
    [ search index=A
    | fields SourceID, type ]
| join type=left TargetID
    [ search index=A
    | fields TargetID ]
| bin span=1w@w0 _time
| eval state=if(match(status,"Done") OR match(status,"Pending"), "Link + State is there", if(NOT match(status,"Done") OR NOT match(status,"Pending"), "State is missing", "No Link"))
| dedup ID _time sortby -state
| timechart span=1w@w0 count by state

Somehow I cannot make it work to get all the non-matching, i.e. "No Link", events. Is the "if" the right way to get what I need? Do I need to add another "eval" within each join? And if yes, how do I do that? Thank you for any help! This should be my result (see screenshot).
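A minimal sketch of one possible fix, assuming that events with no join partner end up with a null Status. Two things stand out in the query above: the eval reads lowercase "status" while the join returns "Status" (SPL field names are case-sensitive), and the nested if() can never reach its "No Link" branch, because "NOT match(...) OR NOT match(...)" is already true for every non-null value. A case() with an explicit null test avoids both:

index=A
| rename Name as TargetName
| join type=left max=0 TargetName
    [ search index=A
    | fields TargetName ID Status ]
| bin span=1w@w0 _time
| eval state=case(isnull(Status), "No Link",
    match(Status,"Done") OR match(Status,"Pending"), "Link + State is there",
    true(), "State is missing")
| dedup ID _time sortby -state
| timechart span=1w@w0 count by state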
Hello All, I have a requirement on the dropdowns. I have the following lookup file, which contains application, environment, and index details. I need to get the environment details related to each application when I choose the app in the application dropdown; similarly, the index dropdown must only give the index details based on the values chosen in the application and environment dropdowns. I could get the desired results using the lookup file, but how can this be achieved using an eval condition in the Splunk dashboard rather than the lookup file? I have the values of the fields in the Splunk results.

application   environment   index
app_a         DEV           aws-app_a_npd
app_a         PPR           aws-app_a_ppr
app_a         TEST          aws-app_a_test
app_a         SUP           aws-app_a_sup
app_a         PROD          aws-app_a_prod
app_b         NPD           aws-app_b_npd
app_b         SUP           aws-app_b_sup
app_b         PROD          aws-app_b_prod
app_c         NPD           aws-app_c_npd
app_c         SUP           aws-app_c_sup
app_c         PROD          aws-app_c_prod
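A minimal sketch of an eval-based population search for the index dropdown, assuming the application and environment dropdowns set tokens $app$ and $env$ (token names are illustrative) and that the naming pattern in the table above holds, with DEV as the one irregular case:

| makeresults
| eval application="$app$", environment="$env$"
| eval index=if(environment="DEV",
    "aws-".application."_npd",
    "aws-".application."_".lower(environment))
| table index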
Hi peeps, I receive the below error while running a query. Here is my query:

eventtype=sfdc-login-history
| iplocation allfields=true SourceIp
| eval cur_t=_time
| streamstats current=t window=2 first(lat) as prev_lat first(lon) as prev_lon first(cur_t) as prev_t by Username
| eval time_diff=cur_t - prev_t
| distance outputField=distance inputFieldlat1=lat inputFieldLat2=prev_lat inputfieldLon1=lon inputFieldLon2=prev_lon
| eval time_diff=-1*time_diff
| eval ratio = distance*3600/time_diff
| where ratio > 500
| geostats latfield=lat longfield=lon count by Application
Hello Everyone, I am just bringing up Splunk within our environment, so a lot of functions are still new. I am trying to use my Windows event data to update user IDs on Panorama; however, running the below query in my ES environment returns the error:

External search command 'panuserupdate' returned error code 2. Script output = "ERROR Unable to get apikey from firewall: local variable 'username' referenced before assignment"

The query:

index=wineventlog host=xxxxxx
| mvexpand Security_ID
| mvexpand Source_Network_Address
| dedup Security_ID Source_Network_Address
| search Security_ID!="NULL SID"
| rename Security_ID as user
| rename Source_Network_Address as src_ip
| panuserupdate panorama=x.x.x.x serial=000000000000
| fields user src_ip

Brief overview of my data ingestion: Panorama syslog is ingested into Splunk Cloud through a heavy forwarder. The Palo Alto Add-on for Splunk is installed on both the HF and Splunk Cloud, but no data is showing in the app; everything is 0. Also, I do have a user account in Panorama with API permissions.
Hear from Morgan McLean, director of product management and one of the co-founders of OpenTelemetry. You'll learn about OpenTelemetry's new logging functionality, including its two logging paths, the benefits of each, and real-world production examples. We'll show the power of the next wave of OpenTelemetry enhancements, including profiling and the insights it unlocks in combination with distributed traces, and how we're extending your observability to client applications. We'll wrap up with a Q&A to answer your most pressing questions.

Related Tech Talk | Starting With Observability: OpenTelemetry Best Practices

Watch the replay
Hello Team, this is a pre-staging environment (not production): a single server with 12 CPUs + 24 GB of memory + RAID0 NVMe (2.5 GB/s write, 5 GB/s read). All-in-one deployment (SH + indexer). CPU cores with HT on a dedicated server (6 cores with HT = 12 CPUs, not used by any other VM). Splunk 9.1.1 and ES 7.1.1, fresh install. NO data ingested (0 events in most of the indexes, including main, notable, risk, etc.), so basically no data yet to be processed. Default ES configuration; I have not yet tuned any correlation searches. And already there are performance problems:

1. MC Scheduler Activity Instance showing 22% skipped.
2. ESX reporting minimal CPU usage (the same with memory).
3. MC showing more details: many different accelerated DM tasks are skipped, all the time.

Questions: obviously the first recommendation would be to disable many of the correlation searches/accelerated DMs, but that is not what I would like to do, because the aim is to test complete ES functionality (by generating a small number of different types of events). Why do I have these problems in the first place? I can see that all the tasks are very short and finish in 1 second; just a few take several seconds. That is expected, since I have 0 events everywhere and always expect a small number of events on this test deployment. What should I do to tune it and make sure there are no problems with skipped jobs? Shall I increase

max_searches_per_cpu
base_max_searches

Any other ideas? Overall this seems weird.
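A minimal sketch of the relevant limits.conf settings, assuming the skips come from the scheduler's concurrency quota rather than real CPU pressure (values are illustrative, not recommendations; defaults shown in comments):

# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# total concurrent search slots = base_max_searches + max_searches_per_cpu * number_of_cpus
base_max_searches = 10      # default 6
max_searches_per_cpu = 2    # default 1

[scheduler]
# percentage of the total slots available to scheduled searches (default 50)
max_searches_perc = 75
# percentage of the scheduler's share reserved for summarization, i.e. DM acceleration (default 50)
auto_summary_perc = 50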
Hello Team, I have my training Splunk instance with ES 7.1, which I use for training and experiments. Question: is there any "training/example" data I could import into it? The point is: I do not want to configure dozens of TAs/collectors or create real labs with real cybersecurity attacks; instead, I would love to have test data so I could learn, experiment with, and review Splunk ES capabilities. Anything?
To rebalance data, Splunk only offers to rebalance one single index or all the indexes; there is no option to provide a list of them. I've created the following shell script to do this, and I hope it will help the community. To use it:

1. Create an empty directory and copy the script there
2. Change the parameters at the beginning of the script to correspond to your platform
3. Create a file named ~/.creds containing MY_CREDS="admin:password" (you can replace admin and password with any user account with admin privileges)
4. Create a file in the same directory as the script named indexes.conf with a list of indexes you want to rebalance (one per line)
5. Launch the script and provide a value for the timeout in minutes ("-m" switch; e.g. "./rebalance.sh -m 2880")

Tip: if you access your manager node using ssh, consider using "screen" to keep your session open for a long period of time.

#!/bin/bash
# Note: bash (not plain sh) is required for [[ ]] and "source".

HOST=my_manager_node.example.com
PORT=8089
source $HOME/.creds
CURL_OPTS="-su ${MY_CREDS} --write-out HTTPSTATUS:%{http_code}"
INDEXES_FILE="indexes.conf"
# Rebalancing is tuned via the configuration endpoint and driven via the
# control endpoint with action=start/stop/status.
CONFIG_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/config/config -d rebalance_threshold=0.9"
START_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/manager/control/control/rebalance_buckets -d action=start"
MAXRUNTIME_OPT="-d max_time_in_min="
INDEX_OPT="-d index="
STOP_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/manager/control/control/rebalance_buckets -d action=stop"
STATUS_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/manager/control/control/rebalance_buckets -d action=status"
LOG="rebalance.log"
MAXRUNTIME="1"

while getopts ":m:" opt; do
  case $opt in
    m) MAXRUNTIME=${OPTARG}
       ;;
    :) echo "Missing value for argument -$OPTARG"
       exit 1
       ;;
    \?) echo "Unknown option -$OPTARG"
       exit 1
       ;;
  esac
done

if [[ "$OPTIND" -ne "3" ]]; then
  echo "Please specify timeout in minutes with -m"
  exit
fi

echo -n "Starting $0: " >>$LOG
echo $(date) >>$LOG
[[ -f $INDEXES_FILE ]] || exit

echo "Configuring rebalancing"
echo "Configuring rebalancing" >>$LOG
EPOCH=$(date +"%s")
MAX_EPOCH=$(( EPOCH + ( 60 * MAXRUNTIME ) ))
echo "Will end at EPOCH: $MAX_EPOCH" >>$LOG

HTTP_RESPONSE=$($CONFIG_CMD)
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
if [[ ! $HTTP_STATUS == "200" ]]; then
  echo "HTTP status: $HTTP_STATUS"
  echo "HTTP body: $HTTP_BODY"
  exit
fi

RUNNING=0
for INDEX in `cat $INDEXES_FILE`; do
  EPOCH=$(date +"%s")
  MINS_REMAINING=$(( ( MAX_EPOCH - EPOCH ) / 60 ))
  if [[ "$MINS_REMAINING" -le "0" ]]; then
    echo "Timeout reached"
    echo "Timeout reached" >>$LOG
    exit
  fi
  echo "Rebalancing $INDEX"
  echo "Rebalancing $INDEX" >>$LOG
  echo "Remaining time: $MINS_REMAINING" >>$LOG
  echo $(date) >>$LOG
  #HTTP_RESPONSE=$(${START_CMD} ${INDEX_OPT}${INDEX})
  HTTP_RESPONSE=$(${START_CMD} ${INDEX_OPT}${INDEX} ${MAXRUNTIME_OPT}${MINS_REMAINING})
  HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
  HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
  echo "HTTP status: $HTTP_STATUS" >>$LOG
  echo "HTTP body: $HTTP_BODY" >>$LOG
  if [[ ! $HTTP_STATUS == "200" ]]; then
    echo "HTTP status: $HTTP_STATUS"
    echo "HTTP body: $HTTP_BODY"
    exit
  fi
  RUNNING=1
  WAS_RUNNING=0
  sleep 1
  while [[ $RUNNING == 1 ]]; do
    HTTP_RESPONSE=$($STATUS_CMD)
    HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
    HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
    if [[ ! $HTTP_STATUS == "200" ]]; then
      echo "HTTP status: $HTTP_STATUS"
      echo "HTTP body: $HTTP_BODY"
      exit
    fi
    echo "$HTTP_BODY" | grep "Data rebalance is not running" >/dev/null
    if [ $? -eq 0 ]; then
      RUNNING=0
    else
      WAS_RUNNING=1
      RUNNING=1
      echo -n "."
      sleep 1
    fi
  done
  if [[ $WAS_RUNNING == 1 ]]; then
    # We need a newline after the last 'echo -n "."'
    echo
  fi
done
Hi guys, I need answers for the below information, which relates only to the Splunk application: the path/query/log file/index for each of the following.

1. Authentication
   - Conditional control not met
   - Disabled/locked account
   - Expired token/certificate
   - Failed logins
   - Invalid credentials
   - MFA check failed
   - Successful logins
   - When someone is elevating their permissions and accessing the mail of another user

2. Authorization
   - Failed authorization
   - Resource does not exist
   - Successful authorization
   - User not authorized

3. Change
   - Add Integrated Application / Service Configuration
   - Change to authentication scheme
   - Change to IP allow list
   - Change to MFA requirements
   - Changes to Authentication or Authorization methods
   - Remove Integrated Application / Service Configuration

4. User Management
   - Add Group to Group
   - Add User or Group
   - Create certificate
   - Create Role
   - Create token
   - Create User
   - Delete Role
   - Delete User
   - Disable token
   - Elevate User to privileged status
   - Remove Group from Group
   - Remove user from privileged status
   - Remove User or Group
   - Revoke certificate
   - Revoke Role
   - Revoke User
   - Update Role
   - Update User

5. Access
   - User accessing a sensitive data object
   - User accessing multiple sensitive data objects
   - User action performed with sensitive data

6. Jobs and activity
   - Activity / Performance Indicators
   - Debug logs
   - System Errors/Warnings
   - System Power Down
   - System Power Up
   - System Service Start
   - System Service Stop
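A minimal sketch for the failed/successful logins item, assuming Splunk's own _audit index is in scope; _audit also covers most of the authorization, change, and user-management items, and _internal covers the jobs and activity items:

index=_audit action="login attempt"
| stats count by user info

Here info carries "succeeded" or "failed".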
Hello, I'm trying to use the data from one search in another search. This is what I'm trying to do:

index=index_example sourcetype=sourcetype_example
| chart count by Status
| eval output=(count / [index=index_example sourcetype=sourcetype_example | stats count] * 100)

In the eval, I just want [index=index_example sourcetype=sourcetype_example | stats count] to be a single whole number.
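A minimal sketch of one way to get this without a subsearch, assuming the goal is each Status as a percentage of all events: eventstats appends the grand total to every row, so the division can stay in a plain eval.

index=index_example sourcetype=sourcetype_example
| chart count by Status
| eventstats sum(count) as total
| eval output=round(count / total * 100, 1)
| fields - total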
If you are waiting to hear back about the results of your Cybersecurity Defense Analyst Certification Beta Exam, we were targeting mid-October.  Due to unforeseen circumstances, the exam scoring has been delayed. As of today, we expect to share the results in about four to six weeks. Apologies for any inconvenience and thanks for your patience. You will receive an emailed Score Report from Pearson VUE with a pass/fail result. If you passed the exam, you will receive the certification and your digital badge, which will come from Credly (admin@credly.com). Good luck and hang in there as we work as quickly as possible to get these exams scored. 
How can business risk observability-fueled collaboration help your teams protect what matters most? Audrey Nahrvar's recent blog discusses what it takes to protect the booming and increasingly complex cloud native application landscapes. Read it here: Why collaboration is vital for mature security practices and how to achieve it

Today, security teams manage ever-expanding attack surfaces with fewer domain professionals on deck to address and remediate alerts. Explore three ways cross-functional collaboration can help organizations rise to these demands.

Additional resources
NIST | National Cybersecurity Awareness Month, Events & Theme Days for October 2023
Webinar | Leverage business risk observability for cloud environments to protect what matters most
Cisco Secure Application product page
What is DevOps: Definition, Best Practices, DevSecOps tools, How AppDynamics helps

About the blog author
Audrey Nahrvar is a product marketing manager with a background in application security and ethical hacking. Audrey held positions at Autodesk and Shutterfly before joining AppDynamics in 2017, first as a Security Engineer and later as a Security Architect. In her off hours, Audrey enjoys spending time with her husky in the mountains.
I have created a graph on an hourly basis, so it displays counts on the bars by hour. Now my requirement is that I want to display one message on the graph: Totalcount=X.
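A minimal sketch of one common workaround, assuming an hourly timechart feeds the graph: eventstats can append the grand total as a constant series, which can then be rendered as a chart overlay or surfaced in the panel title through a token (index=your_index is a placeholder for the actual base search).

index=your_index
| timechart span=1h count
| eventstats sum(count) as Totalcount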
Hi, I am trying to determine how to see which alerts use specific indexes in Splunk. Is there a way to search for that? So if I wanted to see all alerts that use index=firewall, for example, how would I get that?
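A minimal sketch using the saved-searches REST endpoint, assuming the alerts reference the index literally in their SPL ("firewall" is the example from the question):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*index=firewall*"
| table title eai:acl.app is_scheduled search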
The sixth leaderboard update for The Great Resilience Quest is out >> Check out the Leaderboard

Don't lose your momentum! Shout out to our relentless fighters! A special shout-out to CriblMan, who showcased their prowess by making it onto both leaderboards this week! Remember, every two weeks brings a new opportunity for you to climb up the leaderboard. Keep playing, keep learning, and you might find your name highlighted in our next announcement!

Thank you to everyone who has participated so far. Your engagement makes the Great Resilience Quest a huge success. Remain resilient, keep questing, and stay tuned for the upcoming new levels on Oct 20th!

Best regards,
Splunk Customer Success
Hello, I am trying to get, for each hour, the top 1 URL by hits. I am using the below search but not getting results for each hour.

index=*
| fields Request_URL _time
| stats count as hits by Request_URL _time
| bucket span=1h _time
| sort by hits desc
| head 1

Thanks in advance!
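A minimal sketch of one fix, assuming the goal is the single busiest URL per hour: the time bucketing has to happen before the stats (above, stats groups on raw _time values, so the later bucket has nothing left to group), and head 1 keeps only one row overall instead of one per hour.

index=*
| bin span=1h _time
| stats count as hits by _time Request_URL
| sort 0 _time -hits
| dedup _time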
I am looking for a query that can help me list or audit systems that are using default passwords, or for any other method you think I could use to audit my environment for default passwords.
Let's say I'm running a search where I want to look at domains traveled to.

index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")

I want to count the domains that have appeared fewer than 5 times over the search period. How can I accomplish this? I know I could do a stats count by domain, but after that I'm unable to grab the rest of the fields in the index, such as time, etc.
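A minimal sketch using eventstats, which annotates every raw event with its per-domain count instead of collapsing the results the way stats does, so _time and all the other fields survive the filter:

index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")
| eventstats count as domain_count by domain
| where domain_count < 5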
I am looking to create a chart showing the average daily total ingest per month, in terabytes, excluding weekends, over the past year. For some reason I am struggling with this. Any help getting me started would be appreciated.
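A minimal sketch against the license usage log, assuming "ingest" means licensed volume and that _internal retention reaches back a full year (both are assumptions; a summary index may be needed for the full range):

index=_internal source=*license_usage.log* type=Usage earliest=-1y@d
| bin span=1d _time
| stats sum(b) as bytes by _time
| eval dow=strftime(_time, "%a")
| where dow!="Sat" AND dow!="Sun"
| eval TB=round(bytes / 1024 / 1024 / 1024 / 1024, 3)
| timechart span=1mon avg(TB) as avg_daily_TB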