Hi Team, I am using the query below: index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","") | eval EBNCStatus="ebnc event balanced successfully" | table EBNCStatus True. When I select "Last 7 days", it shows multiple columns. I want the value to be displayed according to the drop-down selection, such as last 7 days or last 30 days. Can someone please guide me here?
I am extracting these three values, and if any of the fields has an empty value, the search returns no results. How do I replace the blank values with NA in the rex statements? | rex field=AddtionalData "Business unit:(?<BusinessUnit>[^,]+)" | rex field=AddtionalData "Location code:(?<Locationcode>[^,]+)" | rex field=AddtionalData "Job code :(?<Jobcode>[^,]+)" | stats count by BusinessUnit Locationcode Jobcode | fields - count
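The extraction logic above can be sketched in Python; this is a minimal illustration, not the poster's actual SPL. The patterns are taken from the post, the sample strings are invented, and the `extract` helper is hypothetical; it falls back to "NA" whenever a capture is absent or blank (in SPL, something like `fillnull value=NA` after the rex may achieve a similar effect).

```python
import re

# Patterns mirroring the three rex statements in the post
PATTERNS = {
    "BusinessUnit": r"Business unit:([^,]+)",
    "Locationcode": r"Location code:([^,]+)",
    "Jobcode": r"Job code :([^,]+)",
}

def extract(additional_data: str) -> dict:
    """Extract each field, substituting 'NA' when a value is absent or blank."""
    out = {}
    for name, pat in PATTERNS.items():
        m = re.search(pat, additional_data)
        value = m.group(1).strip() if m else ""
        out[name] = value or "NA"
    return out

# Second sample has an empty Location code, so it becomes "NA"
print(extract("Business unit:Retail,Location code:US01,Job code :X1"))
print(extract("Business unit:Retail,Location code:,Job code :X1"))
```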
Hi All, I am looking for a solution to integrate Splunk in AWS with HIPAA compliance. How is this set up? Is AWS PrivateLink required for HIPAA compliance?
Hello, I would like to use a subsearch to literally paste a command into the SPL, e.g.: | makeresults [| makeresults | eval test="|eval t1 = \"hello\"" | return $test] and for it to be equivalent to | makeresults | eval t1 = "hello". Is this possible?
Hello, I've made a dashboard with Dashboard Studio and uploaded some images. The issue I'm facing is that these images are not visible to users with other roles. They have permission on the dashboard and can access it; the only issue is with the images. How can I fix this?
Hi, is there a way to turn an input playbook into an app? I have a playbook that takes an input and does something. I am looking for a way to make it an app so there is no need to activate another playbook to make it work. Also, it is a bit problematic to run a prior playbook just to activate the input playbook, because then I would have to edit that prior playbook with the relevant input, while with an app it would be much simpler. Thank you in advance.
Hi, how can we apply colors to the respective fields in this dashboard? Source code:
<title>Top Web Category blocked</title>
<search>
<query>index=es_web action=blocked host=* sourcetype=* | stats count by category | sort 5 -count</query>
<earliest>$time_range_token.earliest$</earliest>
<latest>$time_range_token.latest$</latest>
</search>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.chart">bar</option>
<option name="charting.backgroundColor">#00FFFF</option>
<option name="charting.fontColor">#000000</option>
<option name="charting.foregroundColor">#000000</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.drilldown">none</option>
<option name="charting.fieldColors">{"online-storage-and-backup":0x333333,"unknown":0xd93f3c,"streaming-media":0xf58f39,"internet-communications-and-telephony":0xf7bc38,"insufficient-content":0xeeeeee}</option>
<option name="charting.legend.placement">right</option>
<option name="refresh.display">progressbar</option>
</chart>
</panel>
</row>
</form>
Output: I need different colors for all the fields. How can we achieve this? Thanks.
I need your support in finding a way to integrate web apps hosted in the Azure cloud with Splunk. I tried many add-ons from Splunkbase but did not find this option, so if anyone knows how to integrate to get the logs, please let me know. Thank you all.
Hi All, We have approximately 100 Splunk Universal Forwarders (UFs) installed at a remote site, and we're interested in setting up a Heavy Forwarder (HF) at that location to forward the data from the UFs to the indexers. Additionally, we plan to deploy the deployment server on the same virtual machine (VM). Based on the documentation, it appears that a deployment server can be co-located with another Splunk Enterprise instance as long as the deployment client count remains at or below 50. We would like to better understand the rationale behind this limitation of 50 clients, and why it is not possible to manage more than 50 clients by adding another Splunk Enterprise component. Regards, VK
I'm looking for the regular expression wizards out there. I need to do a rex with two capture groups: one for name, and one for value. I plan to use the replace function and throw away everything but those two capture groups (e.g., "\1: \2"). Here are some sample events:
name="Building",value="Southwest",descendants_action="success",operation="OVERRIDE"
name="Building",value=["Northeast","Northwest"],descendants_action="failure",operation="OVERRIDE"
name="Building",value="Southeast",descendants_action="success",operation="OVERRIDE"
name="Building",value="Northwest"
name="Building",value="Northwest",operation="OVERRIDE"
So far I just have this: ^name=\"(.*)\",value=\[?(.*)\]? Any ideas?
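One sketch of a tighter pattern, shown here in Python against the sample events from the post: instead of greedy `.*` groups, it uses an alternation that captures either a quoted scalar or a whole bracketed list, so the match cannot run past the value into the next key=value pair. This is an illustration of the technique, not a tested SPL rex.

```python
import re

# Sample events copied from the post (a subset)
EVENTS = [
    'name="Building",value="Southwest",descendants_action="success",operation="OVERRIDE"',
    'name="Building",value=["Northeast","Northwest"],descendants_action="failure",operation="OVERRIDE"',
    'name="Building",value="Northwest"',
]

# Group 1: the name. Group 2: either "quoted scalar" or [bracketed list].
# Negated character classes stop the match at the closing quote/bracket.
PATTERN = re.compile(r'^name="([^"]+)",value=("[^"]*"|\[[^\]]*\])')

for event in EVENTS:
    m = PATTERN.match(event)
    if m:
        print(f"{m.group(1)}: {m.group(2)}")
```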
Are you a pro with Python? Do you enjoy creating your own apps for Splunk SOAR? Know your way around VS Code? If you answered yes to any of those questions, then this is the session for you! Join Splunk Applications and Systems Engineer Daniel Federschmidt as he shares the latest on the Visual Studio Code Extension for Splunk SOAR, and see how you can make developing apps a breeze. The Splunk SOAR Extension for VS Code works with on-prem and cloud deployments, and its goal is to make the app development experience as seamless and efficient as possible in the VS Code editor. The extension pulls information from Splunk SOAR and allows developers to perform common operations such as browsing remote objects, running actions, and managing the resulting action runs.
Hi, I am having an issue with my data ingestion. I am ingesting an XML log file that is 1 GB in size but is consuming up to 18 GB of my ingestion license. How do I correct this? I did a test converting the data to JSON, but I am still seeing mismatches between the log file size and the data being ingested into Splunk. I am comparing daily ingestion against daily log size.
What I am trying to do is graph/timechart active users. I am starting with this query: index=anIndex sourcetype=perflogs | rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)" | timechart distinct_count(LoginUserID) partial=false. This works, and the resulting graph appears to be correct for 120 minutes, which yields 5-minute time buckets. If I shorten the time period to 60 minutes, which yields 1-minute buckets, then I have a question. In the 120-minute graph with 5-minute buckets, at 6:40-6:45 I have 318 distinct users, but in the 90-minute graph with 1-minute buckets, the individual 1-minute buckets have 136, 144, 142, 131, and 117 distinct users. I understand that a user can be active one minute, inactive the next minute or two, and then active again on the 4th/5th minute, which is what is happening. My question is how to make each 1-minute bin also count users that were active in the previous five 1-minute buckets, resulting in a number that represents users who are logged in, not just currently active. I believe I can add minspan=5min as a kludge, but I am wondering if there is a way to do this at the 1-minute span. I believe what I need to do is run two queries: the first one as above, then an append that queries for events from -5min to -10min. But from what I have been trying, it either is not working or I am not doing it correctly. Basically, I am trying to find those user IDs that are active in the current 1-minute bucket or were active in the previous buckets, then do a distinct_count(...) over the user IDs collected from both queries.
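The trailing-window count described above can be sketched in Python; the event data, field names, and 5-minute window here are invented for illustration. Each minute's value is the number of distinct users seen in that minute or the four before it. (In SPL, a streamstats with a time window may achieve something similar, but that is not shown here.)

```python
from collections import defaultdict

# Hypothetical (minute, user_id) activity events
events = [
    (0, "a"), (0, "b"),
    (1, "b"),
    (2, "c"),
    (4, "a"),
    (5, "d"),
]

def rolling_distinct(events, window=5):
    """Per minute: distinct users active in the trailing `window` minutes."""
    by_minute = defaultdict(set)
    for minute, user in events:
        by_minute[minute].add(user)
    last = max(by_minute)
    result = {}
    for m in range(last + 1):
        active = set()
        # Union the per-minute user sets across the trailing window
        for w in range(max(0, m - window + 1), m + 1):
            active |= by_minute.get(w, set())
        result[m] = len(active)
    return result

print(rolling_distinct(events))
```

Note how minute 5 counts user "b" (last seen at minute 1) even though "b" was idle for several minutes, which is the "logged in, not just active" behavior the post asks about.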
The seventh leaderboard update (10.12-10.25) for The Great Resilience Quest is out >> Check out the Leaderboard. A big shoutout to those players who have made it onto both boards! We appreciate your time and efforts. The last two chapters of the "Great Resilience Quest" were released last Friday! There has never been a better time to join or rejoin the game. Secure your spot on the leaderboard and vie for the Champion's Tribute! We strongly recommend diving into both the Security Saga and Observability Chronicle to boost your chances of winning prizes. Plus, it is a great opportunity to deepen your understanding of various use cases and enhance your digital resilience. The quest is heating up! Stay on course, brave adventurers! Best regards, Splunk Customer Success
We have a Splunk v9.1.1 cluster with a three-member search head cluster (SHC) running on EC2 instances in AWS. In implementing disaster recovery (DR) for the SHC, I configured AWS Auto Scaling to replace the search heads on failure. Unfortunately, Auto Scaling does NOT re-use the IP of the failed instance on the new instance, probably because of other up- and down-scaling use cases, so replacement instances always have different IPs than the failed instance. Starting with a healthy cluster with an elected search head captain and RAFT running, I terminated one search head. During the minute or two it took AWS Auto Scaling to replace the instance, RAFT stopped and there was no captain. I was then unable to add a NEW third search head to the cluster. I then created a similar scenario, but this time had Auto Scaling issue the commands to force one of the remaining two search heads to be an UN-ELECTED static captain, and confirmed this had worked; I had two search heads, one being a captain. The Splunk documentation mentions using a static captain for DR. However, when I again tried to add the new instance as the third search head, I again received the error that RAFT was not running, there was no cluster, and therefore the member could not be added! So what is Splunk's recommendation for disaster recovery in this situation? I understand this is a chicken-and-egg scenario, but how are you expected to recover if you can't get a third search head in place in order TO recover? It seems counter-intuitive that Splunk would disallow adding a third search head, especially with the static captain in place. There are some configurable timeout parameters in server.conf in the [shclustering] stanza. Would increasing any of these values keep the SHC in place long enough for Auto Scaling to replace that third search head instance so it can then join the SHC?
If so, which timeouts should I use, and which values would be appropriate so that they don't interfere with day-in, day-out usage? I'm stuck on this and haven't been able to progress any further. Any and all help is greatly appreciated!
Hello again Splunk experts. This is my current situation:

job_no    field4
131       string1
          string2
132       string3
          string4

|table job_no, field2, field4 | dedup job_no, field2 | stats count dc(field4) AS dc_field4 by job_no | eval calc=dc_field4 * count

produces:

job_no    field2    dc_field4    calc
131       6         2            12
132       6         2            12

This all works fine. The problem is that I also want to include the strings (string1, string2, string3, string4) in my table, like this:

job_no    field4              field2    dc_field4    calc
131       string1, string2    6         2            12
132       string3, string4    6         2            12

Any help would be greatly appreciated.
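The aggregation being asked for (distinct count of field4 per job, plus the field4 strings themselves) can be sketched in Python. The rows below are invented sample data, so the counts differ from the post's table, and the `summarize` helper is hypothetical; it just shows that the list of distinct strings can be carried alongside the distinct count in the same group-by pass.

```python
from collections import defaultdict

# Hypothetical rows: (job_no, field2, field4)
rows = [
    (131, 6, "string1"),
    (131, 6, "string2"),
    (132, 6, "string3"),
    (132, 6, "string4"),
]

def summarize(rows):
    """Per job_no: row count, distinct field4 values joined as a string, and dc * count."""
    by_job = defaultdict(list)
    for job_no, field2, field4 in rows:
        by_job[job_no].append(field4)
    out = {}
    for job_no, field4s in by_job.items():
        distinct = sorted(set(field4s))
        dc = len(distinct)
        count = len(field4s)
        out[job_no] = {"field4": ", ".join(distinct), "dc_field4": dc, "calc": dc * count}
    return out

print(summarize(rows))
```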
I created a dashboard with a query that looks like this: index=cbclogs sourcetype=cbc_cc_performance source="/var/log/ccccenter/performancelog.log" Company IN ($company_filter$) LossType IN ($losstype_filter$) QuickClaimType IN ($QCT_filter$) | eval minsElapsed=round(secondsElapsed/60,0) | timechart median(minsElapsed) by LOB. Suppose LOB has string values like "A", "B", "C", "D", "E", "F", "G", "H". Currently, all values are shown on the right-side Y axis. How can I combine "A","B","C" as "A", "D","E","F" as "E", and "G","H" as "G", so the right side has only three values, without affecting the correctness of the dashboard? Actually, I am not sure whether I should call this right-side colorful column a Y axis. Thanks a lot!
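One approach may be to map LOB onto a coarser grouping field before the timechart (e.g. with an eval case() in SPL) and split by that field instead. The mapping itself is just a lookup table, sketched here in Python with the grouping described in the post; the `group_lob` helper is hypothetical.

```python
# Grouping from the post: A/B/C -> A, D/E/F -> E, G/H -> G
GROUPS = {"A": "A", "B": "A", "C": "A",
          "D": "E", "E": "E", "F": "E",
          "G": "G", "H": "G"}

def group_lob(lob: str) -> str:
    """Return the display group for an LOB value, passing unknown values through."""
    return GROUPS.get(lob, lob)

print([group_lob(x) for x in ["A", "B", "C", "D", "E", "F", "G", "H"]])
```

Splitting the timechart by the grouped field would leave only three series in the legend while the median is still computed over all the underlying events.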
Integrate AppDynamics with the CI/CD pipeline using queries that retrieve metrics to contain potential change failures and protect the business from their impact.

CONTENTS | Introduction | Video | Resources | About the presenter

Video length: 2 min 22 seconds

In this demonstration, see how Flagger, a feature management tool, can continuously query Cloud Native Application Observability for the key metrics you need to determine whether a production environment change was successful. Use AppDynamics' free plugin to view these metrics in a Grafana dashboard. When Cloud Native Application Observability indicates increasing error rates after a change, see how to roll back the change or turn off the feature flag.

As of November 27, 2023, the Cisco Full-Stack Observability (FSO) Platform is now the Cisco Observability Platform, and Cloud Native Application Observability is now Cisco Cloud Observability powered by the Cisco Observability Platform. These name changes better align our products with the Cisco portfolio and with our business strategy.

Additional Resources
Read more about the free Grafana plugin in the AppDynamics Blog.
Learn more about Anomaly Detection in the documentation, including Health Rules and Configuration of Alerts (under Monitor Entity Health), Monitor Anomalies, and Determine the root cause of an anomaly.

About the presenter
Charles Lin, Cisco AppDynamics Field Domain Architect
Charles is a Field Domain Architect at Cisco AppDynamics. He joined Cisco as a Senior Sales Engineer in 2019. Since then, he has helped large enterprises and financial-sector customers improve their monitoring practices. As a Field Domain Architect, he focuses on Cloud Native and OpenTelemetry best practices and on helping fellow team members overcome technical challenges.
He holds multiple patents in the area of IT Monitoring and Operations and is a certified Cisco DevNet Associate. 
The titleband field is missing from the search. It seems like it's a subsearch of an LDAP query or something; if I do Get-ADUser in PowerShell, it's missing from there too. Here is what we had from the event logs: |table titleband adminDescription cn co company dcName department description displayName division eventtype georegion givenName host locationCode mail mailNickname sAMAccountName title userAccountControl userAccountPropertyFlag userPrincipalName
We want to implement MFA for login to our Splunk Enterprise servers. Currently we are using the LDAP authentication method. Is it possible to implement MFA on top of LDAP?