All Topics

In our organization, Cybersecurity has been the driver for our Splunk implementation. Many of the initial use cases have been around challenges shared with the network and service desk teams.  More recently, some of our application system owners are looking for ways to leverage Splunk. How about you?
Watch On Demand

Are you curious for more feedback on how customers navigate your critical web services? Is it still a hassle to correlate real user interactions with server-side performance and transactions? If so, then you are in the right place! Join us to uncover how Splunk's Digital Experience Monitoring (DEM) capabilities prioritize improvements that ensure exceptional end-user experiences and maximize business outcomes.

Tune in to learn about:
- Testing to optimize performance from the user perspective
- Troubleshooting with in-context video analysis
- Prioritizing impact and taking action to immediately improve end-user experiences

Who will benefit: Director of Engineering, Platform Engineer, Developer, Software Engineer, Site Reliability Engineer, Application Support Engineer, Chief Technology Officer, Director of Web Strategy, Director of Web Experience, Director of Digital Web Technology, Director of Web Architecture, Director of Web Content, Director of Global Web Development, Core Web Developer, Director of Front-End Development, Director of Front-End Engineering, IT Operations Engineer, IT Operations Analyst, Splunk Administrator
Below is my search query:

index="inm_inventory"
| table inventory_date, region, vm_name, version
| dedup vm_name
| search vm_name="*old*" OR vm_name="*restore*"

Output is as below. The challenge here is that each vm_name has a different suffix added, and it is not standard, since any user can append any comment to it, so it could be anything. How do I perform the lookup for the vm names when the lookup file only has hostnames and no suffix? I have a lookup file named itso.csv which has details like hostname (all in lower case), tier, owner, country. I want to use the lookup in my main search for the fields tier, owner, country. The end requirement is to look up the vm_name in the itso.csv file and add details like tier, countrycode, owner to the main search output.
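One way to sketch this, assuming the arbitrary suffix always starts after an underscore (the split() delimiter and the vm_base field name are illustrative; adjust them, or swap in a rex, to match the real naming):

index="inm_inventory"
| table inventory_date, region, vm_name, version
| dedup vm_name
| search vm_name="*old*" OR vm_name="*restore*"
| eval vm_base=lower(mvindex(split(vm_name, "_"), 0))
| lookup itso.csv hostname AS vm_base OUTPUT tier, owner, country

If there is no reliable delimiter at all, another option is matching each vm_name against the lookup's hostnames as prefixes, but that generally needs a wildcard lookup definition or a subsearch rather than a plain | lookup.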
Hi all, I have 2 similar queries, as below, to get the total host count and the count of affected hosts:

Query 1: to get the total host count

....
| rex field=_raw "(?ms)]\|(?P<host>\w+\-\w+)\|"
| rex field=_raw "(?ms)]\|(?P<host>\w+)\|"
| rex field=_raw "\]\,(?P<host>[^\,]+)\,"
| rex field=_raw "\]\|(?P<host>[^\|]+)\|"
| rex field=_raw "(?ms)\|(?P<File_System>(\/\w+){1,5})\|"
| rex field=_raw "(?ms)\|(?P<Disk_Usage>\d+)"
| rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%"
| rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)"
| regex _raw!="^\d+(\.\d+){0,2}\w"
| regex _raw!="/apps/tibco/datastore"
| rex field=_raw "(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d"
| rex field=_raw "\[(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\]"
| rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d"
| rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d"
| eval Available=(Total-Used)
| eval Time_Stamp=strftime(_time, "%b %d, %Y %I:%M:%S %p")
| lookup Master_List.csv "host"
| search "Tech Stack"=* Region="GC" Environment=* host=* File_System=* Disk_Usage=*
| stats count by host
| stats count as Total

Query 2: to get the affected host count. It is identical to Query 1 except that the final filter is | search Disk_Usage>=80 instead of | search Disk_Usage=*.

I am able to get each host count and create a dashboard panel in "single value" visualization in trellis layout separately, but I want both host counts in one panel in the trellis layout (something like the sample attachment). Please help me modify the query to get both host counts in one panel on the dashboard. Your kind consideration is highly appreciated! Thank you!
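A minimal sketch of one way to get both numbers from a single search: run the shared extraction/filter pipeline once (ending with | search ... Disk_Usage=*), reduce to one row per host, then use an eval-conditioned count. The field names Total and Affected are illustrative:

... shared pipeline from Query 1, up to and including | search ... Disk_Usage=* ...
| stats max(Disk_Usage) as Disk_Usage by host
| stats count as Total, count(eval(Disk_Usage>=80)) as Affected

In a single-value visualization with trellis enabled, splitting by aggregation should then render Total and Affected side by side in the one panel.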
I have an mstats query that was working fine until last week, but suddenly the success count is not showing up correctly. How do I troubleshoot this issue?
Hi, I want to create an alert that triggers when a user_name exists in a lookup table (e.g. group_names.csv), but I'm not sure how to create the search string for this. The field I'm using in the group_names.csv lookup table is group_names, as follows: if the user_name matches a group_names value listed in the table, the alert should be triggered. Any help on how to do this is much appreciated. Thanks!
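A minimal sketch of a search string for such an alert, assuming the events carry a user_name field (index=your_index and the matched field name are illustrative placeholders):

index=your_index user_name=*
| lookup group_names.csv group_names AS user_name OUTPUT group_names AS matched
| where isnotnull(matched)

Saved as an alert, this can trigger whenever the number of results is greater than 0.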
Hi all, I have a question.

index=app-data "cgth14678ghj" host=http:jbossserver source=application_data_http:jbossserver-20210102-10.log

When I search with this query I get events in Splunk, but when I look on the host side there are no events with the term cgth14678ghj in the source file. How come they are displayed in Splunk without being on the server? Where is Splunk taking this data from, if it is not on the server? Can anyone help me with this?
Using the "virustotal" cmd and it appears that if there are multiple events that have the same file_hash that only one of the events will "populate" the field/values from the virustotal cmd.  I can't... See more...
Using the "virustotal" cmd and it appears that if there are multiple events that have the same file_hash that only one of the events will "populate" the field/values from the virustotal cmd.  I can't post events. Example would be: event 1: _time=08/06/2023 07:00:00 dest=abc1 file_hash=45vv678 file_name=badguy.dll file_path=my_path vt_* will be populated event 2: _time=08/06/2023 07:150:00 dest=abc2 file_hash=45vv678 file_name=badguy.dll file_path=my_path vt_* - nothing will be populated event 3: _time=08/06/2023 07:30:00 dest=abc3 file_hash=45vv678 file_name=badguy.dll file_path=my_path vt_* - nothing will be populated I know the spl is fine as if I were to change the time picker to that of just the 2nd or 3rd event, all the vt_ fields would be populated.  It looks like this is the expected behavior.  Thanks in advance.
Welcome to the Community Member Spotlight series. In this edition, Aaron Schifman, a Cisco AppDynamics Product Marketing Manager and frequent Community poster, shares his perspective, tips, use cases, and more. - Ryan Paredez & Claudia Landivar, Community Managers

In this post:
- Your work: a picture of 'a day in the life'
- Working with AppD products
- Keeping up with industry news
- Life after hours
- Insights to share

Your work: could you give us a picture of 'a day in the life'?

A typical day for me involves working towards helping customers see the value of our products and services. This can entail anything from creating blogs and videos to collaborating on newsletters and other forms of media, as well as organizing the strategies behind how to drive that awareness. My focus is to ensure customers are abreast of how they can be successful with our products by driving marketing initiatives that resonate with them.

How did you get involved with this work?

Quite by happenstance, actually... I have been a systems engineer (among other things) for most of my 23-year career. While working for Dell EMC, I fell into the role of product management and marketing due to organizational changes. I simply adapted to the needs of that org and have not looked back since. I much prefer marketing to pre-sales or solutions design work because, in this role, I am able to touch a broader scope of personas and areas. I also get to work with a lot of very creative folks, coming together with similar objectives, much like sales has, with the result of benefitting customers. In the end, we all come together to support our communal mission.

What has fed your interest in your work?

I am not a musician or an artist; the only creative outlet for me is my words. I relish taking complex concepts and breaking them down into something digestible. I am driven by helping others understand obscure concepts so they can make the decisions they need. I am very fortunate to be an educator, and that is why I have an interest in this position.

Working with AppD products

Have you learned anything interesting about how different customers use AppDynamics to achieve their goals?

When I first came on board a little over two years ago, I thought AppD was just a bunch of flow maps and performance indicators. What some customers have taught me is that they leverage AppD's synthetic browser testing to gain proactive insights, which can potentially eliminate the performance issues customers could, or would, experience. And that is the goal, right? Being proactive so as to never have to fight a customer experience issue. How our customers do that is always inspirational and gratifying.

Can you tell us about a positive experience you've had with the community?

I noticed a post from, let me call him "Mr. M", that inspired me to understand a particular use case of the product. Being new myself, I imagined I could gain from Mr. M's wealth of experience by speaking with him. I was new to AppD, and so the Community offered the best touch point for how our customers actually used and felt about our product. I reached out to Mr. M for some additional insights, and he responded right away, helping me understand the product I was hired to support. Imagine that: a new Cisco AppDynamics employee asking a long-time customer for help. It was an amazing experience and one I will never forget. It is the epitome of what a "community" stands for and why I believe deeply in supporting it.
What are your top AppDynamics hot tips?

- Pay attention to how Health Rules are configured and tune them periodically. We do not live in a set-it-and-forget-it world. Maintenance is key.
- Keep abreast of the latest features and provide feedback to your account teams, and on Community. Stay engaged and be inspired to inspire!

These are just a couple of tips, but every month I try to post a tip with a short video in the Share a tip forum. I really want to hear from other community members about what tips they have, so we can share them publicly there.

What self-help issues do you notice most frequently with customers?

I find that a lot of help is often needed for managing agents and the instrumentation aspect. It is a very complicated area, and one AppDynamics is addressing by providing simplified agent management. With the advent of OpenTelemetry, as well, customers will need self-help options in a lot of new areas.

Keeping up with industry news

How do you keep up with industry news? By surrounding myself with people who share similar interests. I have found the best way to stay afloat is to chat with, and follow on social media, the people and companies I respect and am interested in. I do receive news articles from various sources, but find those more difficult to manage compared to reading insights or a blog recommendation from someone I respect in the evening or when there is a moment of downtime.

Life after hours

How, or where, do you find inspiration?

I find inspiration all around, equally when I am alone as when I'm with others. Life is short, so I try to keep my eyes, ears, and brain open to all possibilities. I love being inspired by others, and hope to inspire as well. It's what makes us human and encourages us to grow.

Insights to share

What advice would you give someone who is up and coming in your field of work?

Always remain curious and do not be afraid to make your mark. Step in where you can and offer your perspective. Do not shy away from being committed to your ideas, even if you do not think they are unique. Be accepting of the status quo, knowing that inspiration takes time to cultivate.
Hello, I have the following query that I am working with; it generates a table with counts for various ports at 15-minute intervals.

index=abc source=xyz SMF119HDSubType=2
| timechart span=15m count by SMF119AP_TTLPort_0001 usenull=f useother=f
| stats values(*) as * by _time
| table _time Port1 Port2

The result is the following table. I only want to display rows with counts greater than 5000. I am trying to use the where Port2>5000 command, but it does not work. I am only displaying 2 port columns here; however, I have several other ports to monitor as well.

_time                Port1   Port2
2023-08-09 09:30:00    800    2700
2023-08-09 09:45:00   1200    4800
2023-08-09 10:00:00   1300    5300
2023-08-09 10:15:00    600    8000
2023-08-09 10:30:00    400   13500

I would appreciate your inputs. Thank you, Chinmay.
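A minimal sketch of one way to keep only the intervals where any port column exceeds the threshold, assuming all port columns match a Port* naming pattern (exceeds is just a scratch flag):

index=abc source=xyz SMF119HDSubType=2
| timechart span=15m count by SMF119AP_TTLPort_0001 usenull=f useother=f
| stats values(*) as * by _time
| eval exceeds=0
| foreach Port* [ eval exceeds=if('<<FIELD>>' > 5000, 1, exceeds) ]
| where exceeds=1
| fields - exceeds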
Background

Customer-configured email-based alerting is a first-class workflow supported by Splunk. We know how vital alerting can be to our customers. To help ensure that you continue receiving any configured email-based alerts from your stacks, please take a moment to review a summary of the changes being introduced. Splunk Cloud Platform is enhancing its outbound email delivery capabilities to provide a more robust, multi-region email delivery service with increased limits on the email payload size.

Summary of changes (Splunk will notify customers of the changes by email, cohort by cohort, prior to the rollout)

Change: Custom MAIL FROM
Description: Standard stacks: the MAIL FROM value in the SMTP envelope of emails originating from Splunk Cloud Platform and SOAR stacks will change from pm.mtasv.net to mail.splunkcloud.com. Emails will continue to have the From field set to alerts@splunkcloud.com. FedRAMP Moderate stacks: the MAIL FROM value in the SMTP envelope of emails originating from Splunk Cloud stacks will change from pm.mtasv.net to mail.splunkcloudgc.com. Emails will continue to have the From field set to alerts@splunkcloudgc.com.
Customer impact: No downtime expected. If you need clarification on any existing network policies at your end, please contact Customer Support so we may work with you to help ensure that you continue receiving email-based alerts.

Change: Dynamic IP addresses for origin mail server
Description: The origin email server is expected to have a dynamic IP address range in the future, compared to the current well-known IP address range.

Enhancement: Multi-region support
Description: Email originating from customer stacks will now be routed via servers that are region-local to the customer stack.
Customer impact: None. No action required.

Enhancement: Email size
Description: Email size, including body/text/images/attachments, will be increased from 10MB to 40MB.
Customer impact: None. No action required.

Impacted stacks
- All Splunk Cloud stacks hosted on AWS and GCP
- All SOAR stacks
- FedRAMP Moderate stacks

Roadmap initiatives

We are also excited to let you in on some of our future roadmap initiatives following this change:

Automating management of suppressed emails: There have been cases where domains/email addresses have been reported as erroneously blocked, resulting in undelivered emails for customers. With the eventual goal of self-service, we will start automating remediation workflows for email suppression use cases.

Number of recipients: Customers are currently able to send an email notification for alerts to multiple customer-configured recipients, not exceeding a count of 50 (the sum of the number of recipients in the To, CC, and BCC fields). We will allow this number to be adjusted.
Hey y'all! I am attempting to create an efficient search to detect password compromises within some environments. The map command is very intensive, and clicking to pass tokens is not automated, so I was wondering if there are other solutions. Below I have 2 macros: the first detects failed logons where the Account_Name field is longer than 13 characters; the second pulls all of the successful interactive logons. Using a saved search would also be acceptable. My goal is to have a result from these two macros that provides the host name where the compromise occurred, the time when the initial password compromise occurred, and which user, if any, successfully logged onto that same machine within 120 seconds of the compromise. This seems like it should have a simple solution, but I can't seem to put two and two together. Appreciate the support!

Macro 1 - password_compromise

index=wineventlog source="wineventlog:security" "logname=security" (EventCode=4625 Logon_Type=2)
| eval newtime=_time+120
| eval oldtime=_time
| eval Account_Name = mvfilter(Account_Name!="-" AND NOT match(Account_Name,"\$$"))
| eval number=len(Account_Name)
| eval status=case(number>13,"Potential Password Compromise", number<=9,"OK")
| where status="Potential Password Compromise"
| rename Account_Name as "Potential Password?"
| sort -_time
| table _time host "Potential Password?" oldtime newtime status
| outputcsv passwordcompromise_initial.csv

Macro 2 - password_logon

index=wineventlog source="wineventlog:security" "logname=security" (EventCode=4624 Logon_Type=2)
| eval AccountName = mvfilter(NOT match(AccountName, "-") AND NOT match(AccountName, "DWM*") AND NOT match(AccountName,"\$$"))
| eval logontime=_time
| dedup host, AccountName, logontime
| where AccountName!=""
| rename AccountName as "AuthenticatedUser"
| table _time host "AuthenticatedUser" logontime
| outputcsv password_successfullogon.csv
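A minimal sketch of one map-free way to correlate the two: pull both event codes in a single search and use streamstats to carry the most recent suspect failure forward per host. It assumes the 4624 events expose the same Account_Name and Logon_Type fields as the 4625 events, so adjust to your actual extractions:

index=wineventlog source="wineventlog:security" (EventCode=4625 OR EventCode=4624) Logon_Type=2
| eval Account_Name=mvfilter(Account_Name!="-" AND NOT match(Account_Name,"\$$"))
| eval suspect=if(EventCode=4625 AND len(Account_Name)>13, 1, 0)
| sort 0 host _time
| streamstats current=f last(eval(if(suspect=1, _time, null()))) as compromise_time by host
| where EventCode=4624 AND isnotnull(compromise_time) AND _time - compromise_time <= 120
| table compromise_time _time host Account_Name

Each remaining row is a successful interactive logon within 120 seconds of a potential password-in-username failure on the same host.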
Hi, I wrote a report that merges the results with a lookup table to add fields (like machineName). The lookup table contains the fields field,source. Then I run sistats as follows:

index=... search query ...
| lookup lk_table_name.csv source AS source
| sistats values(*) as * by TimeStamp, source

If I write the sistats command after the lookup command, the new fields from the lookup table disappear. If I write the sistats before the lookup command everything is OK, but then I have another problem when I try to parse the summary index:

index=summary search_name="query_SummaryIndex_Main"
| stats values(*) as * by TimeStamp, source

What should I do? Why doesn't sistats work after lookup? Thanks, Maayan
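One common workaround, sketched below: keep the lookup out of the sistats leg entirely and apply it at report time, against the summary index. This assumes source survives into the summary events, which it should since it is a by-field of the sistats:

index=summary search_name="query_SummaryIndex_Main"
| stats values(*) as * by TimeStamp, source
| lookup lk_table_name.csv source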
While browsing the documentation for the "Splunk Add-on for Microsoft Office 365", I see a statement: "The MessageTrace API for the Splunk Add-on for Microsoft Office 365 does not support data collection for USGovGCC and USGovGCCHigh endpoints." (https://docs.splunk.com/Documentation/AddOns/released/MSO365/Configureinputmessagetrace) Is this a limitation of the Splunk add-on, or one imposed by Microsoft? Since we are on USGovGCC, it would certainly be nice to have message traces...
Hi, can anyone tell me how to replace the client secret value with the new one I received? I am new to Splunk and badly need help here!
Greetings, we started seeing OpenSSL vulnerabilities on all of our Splunk forwarders and the main engine this week. The advisory tells us we must use OpenSSL 3.0.8 or newer. Since OpenSSL is now on 3.1.2, I really thought the latest Splunk updates would fix the problem. I have just updated all forwarders to 9.1.0.1 and the main engine to 9.1.0.2, and it is still showing OpenSSL at 3.0.7. When will Splunk issue an update to address this and get OpenSSL to at least 3.0.8?
I have a lookup test_lookup with 2 fields, a1 and a2. The field a1 is common with a field in the raw data. The values of fields a1 and a2 are as follows:

a1   a2
a     1
a     2
b     3
b     4

What would be the output of the command ... | lookup test_lookup a1 OUTPUT a2?
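For reference, a quick way to see the behavior yourself, assuming test_lookup is configured as a lookup: because a1 repeats, the lookup returns a2 as a multivalue field (values 1 and 2 for a1=a, values 3 and 4 for a1=b), provided the lookup's max_matches allows multiple matches, as the default does.

| makeresults
| eval a1=split("a,b", ",")
| mvexpand a1
| lookup test_lookup a1 OUTPUT a2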
Dear team, I am setting up the machine agent and trying to send data to the server, but I am getting this error (Windows machine agent):

Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to *****.appdynamics.com:443 [****-dev.saas.appdynamics.com/***.139, *****-saas.appdynamics.com/18.***.***.23, ****-dev.saas.appdynamics.com/5*.2*.**.***] failed: connect timed out

Things we tried and checked:
- Enabled debug mode and captured the log
- Updated the proxy in the conf with port and URL
- The URL can be reached via browser and the curl command
- Imported certs into the security lib of the machine agent

Note: via browser I can reach the server. Please throw some light on this. Thanks in advance, Sathish
Hi team, I was trying to find the workstation clock-out-of-sync logs in Splunk by using the query below, but I am not getting the expected logs. Also, I could not see the Previous Time field among the interesting fields, even though it exists in the raw message. Can someone help me make this query more effective?

index=*_win sourcetype=wineventlog EventCode=4616 category="Security State Change"
| stats max("Previous Time") by asset_id
| where isnull(lastTime)
| addinfo
| eval hourDiff=floor((info_max_time-info_min_time)/3600)
| fields dest,should_timesync,hourDiff

Thanks in advance. A sample of the field from the raw message: Previous Time: 2023-08-09T09:18:00.490316500Z
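A minimal sketch of one way to measure the skew directly from the 4616 events, assuming the timestamps are extracted as Previous_Time and New_Time in the ISO format shown (both field names and the 60-second threshold are assumptions; adjust to your extractions). The replace() strips the sub-second digits so strptime can parse the value:

index=*_win sourcetype=wineventlog EventCode=4616 category="Security State Change"
| eval prev_epoch=strptime(replace(Previous_Time, "\..*Z$", ""), "%Y-%m-%dT%H:%M:%S")
| eval new_epoch=strptime(replace(New_Time, "\..*Z$", ""), "%Y-%m-%dT%H:%M:%S")
| eval skew_seconds=abs(new_epoch - prev_epoch)
| where skew_seconds > 60
| stats max(skew_seconds) as max_skew_seconds latest(_time) as last_seen by host
| convert ctime(last_seen)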