I am looking to create an acronym from a dynamic string by capturing the first letter of each substring after splitting. How do I write the search so that it handles however many substrings get generated from the original string?

e.g. "Hello_World_Look_At_Me" => "HWLAM"
"Hello_World" => "HW"

I'm thinking of doing the following, but this seems pretty lengthy. I'd like to know if there's a more efficient way of getting this done.

| eval txt1 = "Hello_World_Look_At_Me"
| eval tmp = split(txt1, "_")
| eval new_word = substr(mv_index(tmp,1), 1) + ...
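One commonly suggested alternative is to pull every first letter in a single rex call and join the matches, which avoids indexing each substring by hand. A minimal sketch (the regex assumes the separator is always an underscore):

| eval txt1 = "Hello_World_Look_At_Me"
| rex field=txt1 max_match=0 "(?:^|_)(?<letter>\w)"
| eval acronym = mvjoin(letter, "")

With max_match=0, rex returns all matches as a multivalue field, so the same search works for any number of substrings.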
Since we can move data from one index to another within the same cluster by moving buckets, I am in a scenario where I need to create an index on indexer cluster 2, under a new name and with a longer retention period, and move into it the data from an index on cluster 1. Cluster 1 has a replication factor of 3, i.e. every bucket has copies on two other indexers under different names, and those copies should not be carried over to the new index on cluster 2. The challenge is to identify the replicated buckets and skip them, so that only the primary buckets are copied to cluster 2, letting cluster 2 create its own replica buckets according to its own replication factor.

Is this achievable, either via the UI or the CLI? How?

Also, if I only want to copy data of a specific sourcetype from index 1 on cluster 1 to index 2 on cluster 2, how can I do that?

NOTE: I cannot create index 2 with the same name as index 1; that name is already created and in use for other data.

How can I achieve this?
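For the sourcetype-specific part of the question, one option that avoids bucket surgery entirely is to re-ingest the matching events with the collect command, since collect can filter on any search criteria. A minimal sketch (my_sourcetype and the time window are placeholders; note that collect writes new events, which can have licensing implications depending on how the target sourcetype is set, so it is worth validating on a small slice first):

index=index1 sourcetype=my_sourcetype earliest=-7d
| collect index=index2

This doesn't answer the bucket-level primary-vs-replica question, but for a single sourcetype it is often the simplest path, run from a search head that can search cluster 1 and whose output lands on the cluster hosting index2.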
Hi,

How do I create an automatic tag if:

eventtypes.conf
[duo_authentication]
search = sourcetype=json:duo type=authentication

tags.conf
[eventtype=duo_authentication]
authentication = enabled

I also added "default: index=index_of_duo" to the admin, user, and power roles. But the tag simply isn't added (I don't understand why, since the eventtype search above works).
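One quick way to narrow this down is to check whether the eventtype itself is being assigned to events, since tags are applied on top of eventtypes. A minimal verification search (index_of_duo is the index name from the post):

index=index_of_duo sourcetype=json:duo type=authentication
| head 10
| table _time eventtype tag

If eventtype is empty here, the problem is usually the sharing/permissions of eventtypes.conf and tags.conf (both need to be visible in the app context where you search) rather than the tag definition itself.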
October 2023 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious! We're back with another edition of indexEducation, the newsletter that takes an untraditional twist on what's new with Splunk Education. We hope the updates about our courses, certification, and technical training will feed your obsession to learn, grow, and advance your careers. Let's get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You'll Wanna Go

Training You Gotta Take

Nine Hours with 9.1 | Troubleshooting Splunk Enterprise 9.1
Are you ready to tackle the mysteries of Splunk Enterprise 9.1? Then you're ready for our latest course, Troubleshooting Splunk Enterprise 9.1. This 9-hour, paid, instructor-led course designed for Splunk administrators covers topics and techniques for troubleshooting a standard Splunk distributed deployment using the tools available with Splunk Enterprise. This hands-on, lab-oriented class will help you gain troubleshooting experience before attending more advanced courses. Debug a distributed deployment today!
Gotta Learn to Troubleshoot | Splunk Enterprise 9.1

Splunk Observability | Free O11y Education Courses
Let's make digital downtime a thing of the past! The Splunk Observability unified solutions suite can improve digital resilience by lowering the cost of unplanned downtime. For literally zero dollars and a small investment in time, you can learn how to maximize your organization's O11y investment with these Splunk Education self-paced courses. Start with Optimizing Metrics Usage with Splunk Metrics Pipeline to learn how best to optimize your data intake into Splunk Observability Cloud using Splunk Metrics Pipeline Management. Move on to Introduction to Log Observer Connect to learn how to discover trends in log data and use the product for root cause analysis. You gotta see it to believe it.
Gotta Be Free | Dive in to See

Things You Needa Know

Cyber Defenders | We Got You
The Biden Administration established a National Cyber Workforce and Education Strategy (NCWES) to tackle the shortage of cybersecurity pros in the U.S. by boosting cyber skills education. At Splunk, we proudly align with the NCWES, offering robust cybersecurity training programs, certification tracks, and free learning opportunities. Find out about our collaborative efforts to bridge the skills gap and fortify our digital future. Together, we can safeguard our communities and embrace the boundless possibilities of technology.
Needa Know What's Next | Cyber Workforce Advocacy Blog

Splunk Training Units | Get Your Pass to Class
To get hands-on training in the lab, you'll need to move from free to fee! If you're looking to take advantage of Splunk eLearning with labs or instructor-led training (ILT) courses, check in with your Customer Organization Manager, who helps allocate training units (TUs) and tracks usage for learners like you. Then, consider using some training units to take our new instructor-led training, Using SignalFlow in Splunk Observability Cloud, which covers how to use the SignalFlow analytics language in Splunk Observability Cloud. Enroll through STEP, confirm your TUs, and then take your ILT.
Needa Use Your TUs | Paid Training Awaits

Places You'll Wanna Go

To the Lecture Hall | Free Training for Universities
What's on the syllabus this week? Splunk training and cyber education. Bridging the growing cyber skills gap is critical to securing our digital world, which is why we continue to grow and scale our offerings within the Splunk Academic Alliance Program to better serve students, faculty, and staff within non-profit universities, colleges, and schools. Currently, we offer 21 free, self-paced courses with hands-on labs as part of our commitment to training the next generation of cyber experts.
Wanna Go to Academic Alliance | Training for the Next Generation

To the Community | Devesh Logendran, Splunk, and the Singapore Cyber Conquest
Look who's winning. Say hello to a member of the Splunk Community who, early in his career, is taking Splunk to the next level. Head over to the Splunk Community to read the story about how Devesh Logendran went to college, learned a bit of Splunk, captured first place in the Singapore Cyber Conquest, and parlayed that into a free conference pass to Splunk University! Can you imagine where Splunk can take you?
Wanna Go to Great Heights | Learn How with Devesh

Find Your Way | Learning Bits and Breadcrumbs
Go to YouTube | Once Upon an Attack
Go Get Rewarded | Learning Points are Waiting to be Redeemed
Go to STEP | Get Upskilled
Go Discuss Stuff | Join the Community
Go Social | LinkedIn for News
Go Share | Subscribe to the Newsletter

Thanks for sharing a few minutes of your day with us. Whether you're looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.

Answer to Index This: A shirt.
I don't want to send an alert because I want the benefits of a report (all results in one file as opposed to sending an alert for each hit on the search), so I'm trying to figure out how to send a report but only if it has results. If it has zero results, I don't want it to send. 
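A report is ultimately a saved search, so one way people approach this is to put a trigger condition on the scheduled search so the email action only fires when the result count is greater than zero. A rough savedsearches.conf sketch (the stanza name, schedule, and recipient are placeholders; the same options map to the trigger/alert condition settings in the UI):

[my_scheduled_report]
cron_schedule = 0 6 * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = someone@example.com
action.email.sendresults = 1

With counttype/relation/quantity set this way, the search still runs on schedule but the email is suppressed whenever there are no results. Note also that an alert whose trigger mode is "Once" (rather than "For each result") sends all results in a single email, which gives the same one-file behavior.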
Hi,

Is there any way to find a transaction flow like this? I have a log file containing 50 million transactions like this:

16:30:53:002 moduleA:[C1]L[143]F[10]ID[123456]
16:30:54:002 moduleA:[C2]L[143]F[20]ID[123456]
16:30:55:002 moduleB:[C5]L[143]F[02]ID[123456]
16:30:56:002 moduleC:[C12]L[143]F[30]ID[123456]
16:30:57:002 moduleD:[C5]L[143]F[7]ID[123456]
16:30:58:002 moduleE:[C1]L[143]F[10]ID[123456]
16:30:59:002 moduleF:[C1]L[143]F[11]ID[123456]
16:30:60:002 moduleZ:[C1]L[143]F[11]ID[123456]

I need to find the module flow for each transaction and then find the rare flows.

Challenges:
1. There is no single key value present on all lines belonging to a transaction.
2. The only value that exists on every line is ID[123456].
3. An ID might be duplicated and might belong to several transactions.
4. Module names follow no fixed naming scheme and there are lots of them.
5. The ID is not at a fixed position (here it happens to be at the end of each line).

Any ideas? Thanks
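A starting-point sketch: extract the module and ID with rex, order events by time, build the per-ID sequence with stats list(), and rank the sequences with rare. It assumes the module name is the first token before a colon, and that one ID means one transaction; challenge 3 (duplicated IDs) would need an extra grouping key, e.g. a time gap via the transaction command with maxpause. The index name is a placeholder:

index=my_index
| rex "^\d{2}:\d{2}:\d{2}:\d{3}\s+(?<module>[^:\s]+):"
| rex "ID\[(?<id>\d+)\]"
| sort 0 _time
| stats list(module) as flow by id
| eval flow = mvjoin(flow, " -> ")
| rare limit=20 flow

Because sort runs before stats, list() preserves the chronological module order, so each flow string reads like moduleA -> moduleB -> moduleC.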
Hello @kamlesh_vaghela,

This is with regard to your solution posted on the thread below:
https://community.splunk.com/t5/Splunk-Search/How-to-apply-the-predict-function-for-the-most-varying-field/m-p/422163

I have a relatively similar use case: I have multiple columns, the first of which is _time, and the remaining columns are distinct fields with numeric data for each timestamp. I need to compute forecast values using the predict command. I tried your approach of looping through fields with foreach and passing each to predict; however, it only takes one field and computes the forecast for that field alone. I need to do the same for all the fields returned by the timechart command, so your input would be very helpful.

Thank you,
Taruchit
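For what it's worth, predict can be given more than one field in a single invocation (subject to the command's documented limits), which may remove the need for foreach entirely. A minimal sketch, assuming the timechart produces columns named hostA, hostB, and hostC (hypothetical names):

... | timechart span=1h sum(bytes) by host
| predict hostA hostB hostC future_timespan=24

future_timespan controls how many future spans are forecast; the command emits prediction and confidence-interval columns per input field.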
Hello,

By default, DMA summaries are not replicated between nodes in an indexer cluster (for warm and cold buckets). I wonder how the tstats command with summariesonly=true behaves when one node in the cluster fails. Imagine I have a 3-node, single-site IDX cluster with default settings. What happens when one node fails (so the summaries on that node are unavailable) and I run a search using "|tstats summariesonly=true..." on this cluster? If the search spans data from primary warm or cold buckets on the failed node, I will get incomplete data, right? (I think so, because the corresponding summaries are missing.) And if so, will I get any error message on the search page?

And how does this change in a multi-site cluster? I assume that if one node fails I should still get complete data, because AFAIK in a multi-site cluster every site has a primary copy of each bucket along with its DMA summaries. Is that right or not?

I need this info for a project I am working on. Thank you for your answers.

Best regards,
Lukas Mecir
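One practical way to observe the effect, rather than reason about it in the abstract, is to compare summary-only counts against full counts over the same time range; a gap suggests events whose summaries are currently unreachable. A sketch against a hypothetical Authentication data model, run twice over an identical window:

| tstats summariesonly=true count from datamodel=Authentication

| tstats summariesonly=false count from datamodel=Authentication

summariesonly=false falls back to searching raw events for unsummarized buckets, so the difference between the two counts approximates how much data the summaries are missing at that moment.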
Hi Splunkers,

1) I want to remove all special characters from my field called "Test" other than "." (dot) and "-" (dash), and
2) return the values in lower case.

Example field values for Test:

i4455.mango.com
qa (qa_a_ai_bi1_integration_d01)
app-9999-bee-mysql-prod

Please help.
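A compact way to do both in one eval, assuming "special characters" means anything other than letters, digits, dot, and dash (spaces included):

| eval Test_clean = lower(replace(Test, "[^A-Za-z0-9.\-]+", ""))

replace() drops every character outside the allowed class and lower() handles the case conversion; e.g. "qa (qa_a_ai_bi1_integration_d01)" becomes "qaqaaaibi1integrationd01".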
Hi all,

I am using Splunk Enterprise Security and am having trouble making my indexes CIM compliant. One of them is Cloudflare. The JSON data is ingested via an AWS S3 bucket, and visualization works fine in the Cloudflare App. However, the CIM Validator doesn't recognize the events, so they can't be used in Splunk ES. Has anyone been able to successfully make these events CIM compliant?

Thanks,
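CIM compliance generally comes down to search-time knowledge objects: field aliases or evals that map the vendor's JSON fields onto the data model field names, plus an eventtype and tags that route the events into the right model. A minimal sketch targeting the Web data model; the sourcetype and the Cloudflare field names used here (ClientIP, ClientRequestHost, ClientRequestURI) are assumptions to adapt to your actual events:

props.conf
[cloudflare:json]
FIELDALIAS-cim_src = ClientIP AS src
FIELDALIAS-cim_site = ClientRequestHost AS site
FIELDALIAS-cim_uri = ClientRequestURI AS uri_path

eventtypes.conf
[cloudflare_web]
search = sourcetype=cloudflare:json

tags.conf
[eventtype=cloudflare_web]
web = enabled

Once the tags are in place, the CIM Validator and ES data model searches should start picking the events up, provided the aliased fields actually exist in the JSON.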
Hello,

I understand that you access the monitoring web UI through the Launch Controller button on the account page. I received the license today and proceeded with the installation, but two errors occur, as follows:

1. <h1>500 Internal Server Error</h1><br/>Exception <br/>

2. : HttpErrorResponse <html><body><h1>500 Internal Server Error</h1><br/>Exception <br/></body></html> Http failure response for https://chaplinappdynamics.com/controller/restui/containerApp/mainNavConfig: 500 Internal Server Error

I didn't click "Use local login"; I clicked "Next". Can you tell me what the problem is? Thank you.
I have a field called position that contains integers, and a token called position_select that is either a floating-point number or a * (= all positions). Now I want to search for all positions that match position_select, so I tried something like this:

index = index1
| eval position_search = floor($position_select$)
| where position = position_search

The problem is that, of course, you can't use * in floor. Another problem is that | where position = * is impossible too. I can't use | search either, because | search position = position_search does not work.

So the question is: is there any way to use something like floor() on position_select?
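One way around both problems is to quote the token as a string and let an OR clause absorb the wildcard case, so floor() only matters when the token is numeric. A sketch (assuming position_select comes from a dashboard input):

index=index1
| where "$position_select$" = "*" OR position = floor(tonumber("$position_select$"))

When the token is "*", the first clause is true for every event; when it is a number, tonumber() parses it and floor() yields the integer to compare against position.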
Hi,

We receive daily emails with lists of IOCs for malware and phishing alerts. Each email may contain multiple IP addresses, domains, and email addresses, and we are trying to extract these to run searches against our web and email logs. I have the regex working for extraction, but it will only extract the first match. I've tried multiple ways of achieving this without success. The current config is:

props.conf
EXTRACT-IOCURL = (?P<IOCURL>[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9][\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9][\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9]+[\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9]+[\[][\.|@][\]][^\s]{2,})
EXTRACT-IOCIP = (?P<IOCIP>\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}+)

The indexed email looks like this:

....
Domains
comprised[.]site
badsite[.]studio
malware[.]live
IP addresses
192[.]254[.]71[.]78
192[.]71[.]27[.]202
193[.]182[.]144[.]67
....

But the current config only extracts the first record of each: IOCURL - comprised[.]site and IOCIP - 192[.]254[.]71[.]78. Any ideas how to extract all the domains and IP addresses?

Thanks
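EXTRACT- stanzas in props.conf only ever return the first match; to capture every match per event you generally need a transforms.conf extraction with MV_ADD and REPEAT_MATCH, so repeated hits are appended as multivalue entries. A sketch for the IP case; the stanza names and sourcetype are placeholders, and the same pattern applies to the domain regex:

props.conf
[ioc_email_sourcetype]
REPORT-iocip = ioc_ip_mv

transforms.conf
[ioc_ip_mv]
REGEX = (\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3})
FORMAT = IOCIP::$1
MV_ADD = true
REPEAT_MATCH = true

At search time you can also test the idea quickly with | rex max_match=0 "(?<IOCIP>\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3})" before committing it to the config.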
Hello community,

We installed the CIM app in our SH cluster. The installation seemed to work as expected, all defaults, no modifications. Afterwards, when trying to launch the app, we landed on a "ta_nix_configuration" page, and trying to access the "cim_setup" page we got a 404. We removed the app, rolled all members, and re-installed. Once again it all seemed to work just fine. This time around we can access the cim_setup page, but if we go to "<baseurl>/app/Splunk_SA_CIM/" directly or use the "launch app" link in the GUI, we land on the "<baseurl>/app/Splunk_SA_CIM/ta_nix_configuration" page. Is this somehow the expected behaviour, or have we got some crossed wires somewhere?
I have two counts in the dashboard: one is the total count and the other is the error count, and to get the success count I want the difference. How can we do that?

index=US_WHCRM_int (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=INFO ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*") OR (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=ERROR "Error Message")
| rex field=message "(?<json_ext>\{[\w\W]*\})"
| rex field=message "put:\\\\(?<Entity>[^:]+)"
| rename attributes{}.value.details as details
| rename properties.correlationId as correlationId
| table _time properties.* message json_ext details Entity
| spath input=json_ext
| stats count by Entity

Using | stats count by Entity and | stats count by title I am getting two counts; how can I find the difference between the Entity count and the title count?
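Rather than running two searches and subtracting, one pattern is to compute both counts in a single stats pass with count(eval(...)) and take the difference with eval. A sketch, assuming the severity field is what distinguishes errors from the rest:

...
| stats count as total_count, count(eval(severity="ERROR")) as error_count by Entity
| eval success_count = total_count - error_count

If the two counts really must come from differently grouped searches (Entity vs. title), appendcols can place the columns side by side before the eval, though that relies on the rows lining up.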
I have a Splunk Enterprise cluster that no longer ingests new data, but the existing indexes should remain searchable for a while. Since search usage is only sporadic, I'd like to save on infrastructure cost and hibernate the whole cluster, bringing it up only when someone needs to search the old data and then hibernating it again.

How would I best do this? My environment consists of a search head cluster with 2 members and an indexer cluster with 6 members. My understanding is that as soon as I start stopping indexers, the cluster will try to rebalance the data across the remaining indexer nodes. That seems suboptimal, since I need to stop all the instances eventually and don't want to end up with a single indexer node holding all the data.

Any ideas?
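One mechanism worth looking at is the cluster manager's maintenance mode, which suspends bucket fix-up (the replication repair you are worried about) while peers go down. A rough CLI sketch of the shutdown half, hedged as a starting point rather than a full runbook; bringing the cluster back is essentially the reverse:

# on the cluster manager: defer bucket fix-up activity
splunk enable maintenance-mode

# then stop each indexer peer, the search heads, and finally the manager
splunk stop

# ...later, after everything is started back up, on the cluster manager:
splunk disable maintenance-mode

While maintenance mode is on, the manager defers most bucket fix-up, so stopping peers one after another should not trigger the rebalancing cascade described above. It is worth testing the full down/up cycle once before relying on it.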
Hi all,

I have a CSV file which contains some counts. I have a set of groups like a, b, c, d, which are the values of a multiselect dropdown (the set is dynamic), and the columns in the CSV are new_a_Total, new_a_added, new_a_removed, new_b_Total, new_b_added, new_b_removed, new_c_Total, new_c_added, new_c_removed, new_d_Total, new_d_added, new_d_removed. When I select more than one value from the multiselect dropdown, I want to add up the respective Total, added, and removed columns of the selected groups and show them in the timechart. For example, if a and b are selected, new_a_Total and new_b_Total should be summed and renamed Total, new_a_added and new_b_added summed and renamed added, and so on, so that all the respective values are combined into a single result. How can I achieve this? Currently I am trying it with foreach. Any suggestions would be really helpful.
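One way to make foreach cooperate with the multiselect is to have the token expand to field patterns, keep only the selected groups' columns, and then let wildcard foreach templates do the summing. A sketch, assuming a hypothetical lookup name and a multiselect token named groups configured with valuePrefix "new_", valueSuffix "_*", and a space delimiter (so selecting a and b yields new_a_* new_b_*):

| inputlookup my_counts.csv
| fields _time $groups$
| eval Total = 0, added = 0, removed = 0
| foreach new_*_Total [ eval Total = Total + coalesce('<<FIELD>>', 0) ]
| foreach new_*_added [ eval added = added + coalesce('<<FIELD>>', 0) ]
| foreach new_*_removed [ eval removed = removed + coalesce('<<FIELD>>', 0) ]
| table _time Total added removed

Because the fields command has already filtered to the selected groups, the wildcard foreach only ever sees (and sums) the columns you asked for.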
Dear All,

Please suggest how to create a separate incident review dashboard for each team, or how notables can be separated by team, i.e.:

Windows Team - can only see Windows-related notables
Unix Team - can only see Unix/Linux-related notables
SOC Team - can see all notables
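The building block most approaches start from is a field on the notable that identifies the owning team (security_domain, or a custom field set by the correlation search), which each team's view then filters on. A sketch of a team-scoped notables search that could back a per-team dashboard panel; the security_domain value is an assumption to match against your correlation searches:

| `notable`
| search security_domain="endpoint"
| table _time rule_name urgency status owner

Truly enforcing the separation (rather than merely filtering a dashboard) would additionally require role-based restrictions, since anyone who can search the notable index can otherwise see all notables.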
Hello community,

I'm encountering a problem that's probably simple to correct, but no matter how hard I try, I can't fix it. I have a query that returns several results, which I count per time slot; this lets me provide a graph showing the hourly load. However, I noticed that when there is no result in a time slot (for example between 3:00 a.m. and 4:00 a.m.), the graph is incomplete: the time slot in question is missing. Here is my current query:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure = strftime(_time, "%H")
| stats count by Heure
| rename count AS Events
| sort Heure

I tried to force a "0" value when there was nothing, but that didn't change anything:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure = strftime(_time, "%H")
| stats count by Heure
| rename count AS Events
| eval Events=if(isnull(Events) OR len(Events)==0, "0", Events)
| sort Heure

I looked on the forum to see whether other people have had this problem, but I couldn't find anything (or I didn't look in the right place). Do you have an idea for simply adding a "0" value when a time slot is empty, so that it appears in the graph?

Best regards,
Rajaion
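The eval can't help here because stats only emits rows for hours that actually occurred; the missing hours have to be injected as rows. One sketch is to append a zero-count row for each of the 24 hours and then re-aggregate, so any hour absent from the real data still shows up with 0:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure = strftime(_time, "%H")
| stats count as Events by Heure
| append [| makeresults count=24 | streamstats count as h | eval Heure = if(h <= 10, "0" . tostring(h - 1), tostring(h - 1)) | eval Events = 0 | fields Heure Events]
| stats sum(Events) as Events by Heure
| sort Heure

The subsearch generates the 24 zero-padded hour labels ("00" through "23") with zero counts; the final stats sums real and dummy rows per hour. If the graph is over continuous time rather than hour-of-day, timechart span=1h followed by fillnull value=0 is the simpler route.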
Hello,

The splunkd service stops shortly after I start or restart it; I have tried several times. Could you please help me sort out this issue? Thanks in advance.
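With no error message to go on, the usual first steps are a configuration sanity check and a look at splunkd's own log from the failed start. A couple of hedged starting points (paths assume a default $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool check
tail -n 100 $SPLUNK_HOME/var/log/splunk/splunkd.log

btool check flags malformed .conf files, and the tail of splunkd.log typically contains the FATAL/ERROR line explaining why the daemon exited.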