All Topics

<panel id="global" rejects="$hideglo$"> <input type="dropdown" id="orgselect" token="org" searchWhenChanged="false"> <label>Organization</label> <showClearButton>false</showClearButton> <search> ... See more...
<panel id="global" rejects="$hideglo$"> <input type="dropdown" id="orgselect" token="org" searchWhenChanged="false"> <label>Organization</label> <showClearButton>false</showClearButton> <search> <query>| `orgList`</query> <earliest>0</earliest> <latest>now</latest> </search> <fieldForLabel>cust_name</fieldForLabel> <fieldForValue>cust_name</fieldForValue> <prefix>em7_cust_name="</prefix> <suffix>" em7_cust_name!=Cisco </suffix> </input> <input type="dropdown" id="region" token="region" searchWhenChanged="false"> <label>Region</label> <showClearButton>false</showClearButton> <selectFirstChoice>true</selectFirstChoice> <search> <query>|inputlookup cert_groups_lookup | lookup cert_servers_lookup group_id OUTPUTNEW em7_org_id | mvexpand em7_org_id | dedup em7_org_id,group_id | search em7_org_id="$cust_id$" | sort 0 group_name</query> <earliest>0</earliest> <latest>now</latest> </search>
Hi, I have logger statements like the one below:

Event data - {"firstName":"John","lastName":"Doe"}

My query needs a <rex-statement> that parses the double quotes (") in the logs and extracts the two fields into a table:

index=my-index "Event data -"
| rex <rex-statement>
| fields firstName, lastName
| table firstName, lastName

Please let me know what <rex-statement> I have to put. Thanks in advance.
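A minimal sketch of one possible <rex-statement>, assuming the JSON payload always directly follows the literal text "Event data - " and contains no escaped quotes (double quotes inside the SPL string are backslash-escaped):

index=my-index "Event data -"
| rex "Event data - \{\"firstName\":\"(?<firstName>[^\"]+)\",\"lastName\":\"(?<lastName>[^\"]+)\"\}"
| table firstName, lastName

Since the payload is valid JSON, an alternative is to rex out just the {...} portion into a field and run spath on it, which avoids hand-maintaining the regex as fields are added.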
So I'm new to Splunk on GCP and still learning. One thing I'm trying to wrap my head around: GCP Pub/Sub provides native support for HTTP push, which is pretty straightforward. Splunk's GCP integration has the Dataflow template, which seems to be a data pipeline that just reformats the logs and pushes them through the Splunk HEC, an HTTP endpoint. From an architectural point of view, introducing the Dataflow template into the GDI adds an extra layer when the log export could seemingly be done by Pub/Sub HTTP push, so what is the specific value add of the Dataflow template?
Hello, I am having some issues performing multi-line field extraction for XML; my in-line extraction is not getting any results. Sample events and my in-line extraction are provided below. Any help would be appreciated.

Sample Events:

<Event>
<ID>0123011</ID>
<Time>2023-10-28T05:22:37.97011</Time>
<Application_Name>Test</Application_Name>
<Host_Name>VS0SMADBEFT</Host_Name>
</Event>
<Event>
<ID>01232113</ID>
<Time>2023-10-28T05:22:37.99011</Time>
<Application_Name>Test</Application_Name>
<Host_Name>VS0SMADBEFT</Host_Name>
</Event>

In-line extraction I used:

<ID>(?<ID>[^<]+)<\/ID>([\r\n]*)<Time>(?<Time>[^<]+)</Time>([\r\n]*)<Application_Name>(?<Application_Name>[^<]+)</Application_Name>([\r\n]*)<Host_Name>(?<Host_Name>[^<]+)</Host_Name>
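A hedged guess at a fix, not a confirmed answer: the [\r\n]* separators only allow bare line breaks, so any indentation or other whitespace between a closing tag and the next opening tag makes the whole pattern fail. Replacing them with \s*, which tolerates any whitespace, is often enough:

<ID>(?<ID>[^<]+)</ID>\s*<Time>(?<Time>[^<]+)</Time>\s*<Application_Name>(?<Application_Name>[^<]+)</Application_Name>\s*<Host_Name>(?<Host_Name>[^<]+)</Host_Name>

If the events are well-formed XML at search time, setting KV_MODE = xml in props.conf or piping through spath would extract these fields without a hand-written regex.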
Hello all! This will be a doozy, so get ready. We are running a search with tstats-generated results; from various troubleshooting we simplified it to the following:

| tstats count by host
| rename host as hostname
| outputlookup some_kvstore

The config of the kvstore is as follows:

# collections.conf
[some_kvstore]
field.hostname = string

# transforms.conf
[some_kvstore]
collection = some_kvstore
external_type = kvstore
fields_list = hostname

When you run the first two lines of the SPL, you get quite a few results, as it queries the internal db for hosts and retrieves a count of their logs. After you add the outputlookup command, it removes all your results and will not add them to the kvstore.

As my coworker found, there is a way to write the results to the kvstore after all, however the SPL for that is quite cursed, as it involves joining the original search back in; with this, the new results are written to the kvstore:

| tstats count by host
| rename host as hostname
| table hostname
| join hostname
    [| tstats count by host
    | rename host as hostname]
| outputlookup some_kvstore

As far as I'm aware, 9.1.2, 9.0.6, and the latest versions of Cloud have this issue even as fresh installs of Splunk; however, it does work on 8.2.1 and 7.3.3 systems (don't ask). The Splunk user owns everything in the Splunk dir so there is no problem with writing to any files, the kvstore permissions are global, and any user can read or write to it. So after several hours of troubleshooting, we are stumped here and not sure where we should look next. Changing to a csv is unfortunately not an option.

Things we have tried so far, that I can remember:
- Completely fresh installs of Splunk
- Cleaning the kvstore via `splunk clean kvstore -local`
- Outputting to a csv (works)
- Using makeresults to create the fields manually and add to the kvstore (works)
- Using the noop command to disable all search optimization
- Writing to the kvstore via API (works)
- Reading data from the kvstore via inputlookup (works)
- Modifying an entry in the kvstore via the lookup editor app (works)
- Testing with all search modes (fast, smart, verbose)
I am trying to remove Windows EventCodes 4688 and 4627. Nothing I have tried has worked. Here are the things that I have tried in inputs.conf:

blacklist = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\bin\splunk.exe)|.+(?:SplunkUniversalForwarder\bin\splunkd.exe)|.+(?:SplunkUniversalForwarder\bin\btool.exe)|.+(?:Splunk\bin\splunk.exe)|.+(?:Splunk\bin\splunkd.exe)|.+(?:Splunk\bin\btool.exe)|.+(?:Agent\MonitoringHost.exe)"
blacklist1 = EventCode="4688"
blacklist2 = EventCode="4627"
blacklist = EventCode=4627,4688
blacklist = EventCode=4627|4688
blacklist = EventCode=%^(4627|4688)$%
blacklist = EventCode=%^4627$%
blacklist = EventCode=%^4688$%
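A hedged sketch of a form that should work, assuming these entries live under the event log stanza in the forwarder's inputs.conf (per inputs.conf.spec, each numbered blacklist entry takes key="regex" pairs, and a forwarder restart is required). Note that the Message pattern above leaves the path backslashes unescaped, so \bin is read as a word boundary followed by "in" rather than \bin:

[WinEventLog://Security]
blacklist1 = EventCode="^(4627|4688)$"

If 4688 should only be dropped for Splunk's own processes, a separate entry combining EventCode and Message (with doubled backslashes in the paths) can replace the blanket 4688 rule; entries are evaluated independently, so keep the unconditional 4627 rule on its own line.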
We've been shouting it from the rooftops! The findings from the 2023 Splunk Career Impact Report show that proficiency in using Splunk offers a competitive edge for users and customers. Have you read it? A picture paints a thousand words, but an infographic validates it with facts and figures. If you didn't have time to read through the full report, we've got you! Scan through the stats and metrics behind the survey in the 2023 Career Impact Survey Infographic. (We've got a sneak peek below.)

Thanks for being part of our community and helping us show the world what proficiency in Splunk can offer both enterprises and their employees!

-- Callie Skokos on behalf of the Splunk Education Crew
I have some search before this, and after I extract fields (name, status) from JSON and mvzip them together, I get this table (name, status, and nameStatus are multivalue fields):

_time               | name  | status     | nameStatus
2023-12-06 16:06:20 | A B C | UP DOWN UP | A,UP B,DOWN C,UP
2023-12-06 16:03:20 | A B C | UP UP UP   | A,UP B,UP C,UP
2023-12-06 16:00:20 | A B C | DOWN UP UP | A,DOWN B,UP C,UP

I want to get only the records from the latest time, so I pipe in the command ...|stats latest(nameStatus). However, the result comes out only as A,UP.

How can I fix this? Thank you!
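A minimal sketch of one way around this, assuming the goal is every value from the most recent timestamp: instead of stats latest(), which here collapses the multivalue field down to a single value, keep the rows whose _time equals the maximum:

... | eventstats max(_time) as latest_time
| where _time = latest_time
| fields - latest_time

An equivalent for a single latest event is | sort - _time | head 1, which also preserves the full multivalue nameStatus.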
I'm going crazy trying to troubleshoot this error with eventgen. I'm only using one mvfile replacement type and it is not working. The SA-Eventgen logs tell me this:

time="2023-12-06T19:42:32Z" level=warning msg="No srcField provided for mvfile replacement: "

In my $SPLUNK_HOME/etc/apps/<app>/default/eventgen.conf file, I have:

...
token.2.token = "(\$customer_name\$)"
token.2.replacementType = mvfile
token.2.replacement = $SPLUNK_HOME/etc/apps/eventgen_yogaStudio/samples/customer_info.txt:1
...

My customer_info.txt file (referenced with column 1) contains:

JoeSmith,43,Wisconsin,Pisces
JaneDoe,25,Kentucky,Gemini
...

I'm getting JSON-formatted events, but customer_name is just blank:

{
  membership: gold
  customer_name:
  item: 30-day-pass
  quantity: 4
  ts: 1701892130
}

I've tried the following sample file names:
customer_info.txt
customer_info.sample
customer_info.csv
Nothing seems to work. I'm going crazy!
Hi all, I published a new version of my app, https://splunkbase.splunk.com/app/7087, version 1.2.0 (invisible for now because of the issue below). When I tried to install it on my cloud instance through Splunkbase, I got the following error:

X509 certificate (CN=splunkbase.splunk.com,O=Splunk Inc.,L=San Francisco,ST=California,C=US) common name (splunkbase.splunk.com) did not match any allowed names (apps.splunk.com,cdn.apps.splunk.com)

That's weird, because I did not change anything about the certificate or the packaging process; I just fixed one more bug in the app (about missing data) and bumped the app version. I tried other apps on Splunkbase and the old version of my app, and they all work fine. Does anyone have an idea what happened to my 1.2.0 app? Your help will be appreciated very much!
Hi, I have a problem excluding, or including only, entries that contain specific string values in the msg field. For example, there are two (maybe more) definite string values contained in the msg field:

1. "GET /ecc/v1/content/preLoginBanners HTTP/1.0"
2. "GET /ecc/v1/content/category/LegalTerms HTTP/1.0"

I need three statements like the following:
1. Include ONLY 1 above in the msg field.
2. Include ONLY 2 above in the msg field.
3. Exclude 1 and 2 above, to determine whether there are more unknown values in the msg field.

I imagine I will be using this type of logic more on other output fields as time goes on. I am new to this and I am using the XML-based ad hoc search input/output form. Any help is greatly appreciated!
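A minimal sketch of the three filters, assuming msg is an extracted field and the strings appear verbatim (the wildcards allow surrounding text):

1. index=your_index msg="*GET /ecc/v1/content/preLoginBanners HTTP/1.0*"
2. index=your_index msg="*GET /ecc/v1/content/category/LegalTerms HTTP/1.0*"
3. index=your_index NOT (msg="*preLoginBanners*" OR msg="*LegalTerms*")

For the third case, following with | stats count by msg (or | top msg) is a quick way to surface the remaining unknown values.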
Hello, I am trying to find a command that will allow me to create a table that only displays rows with values. When using the user agent field in my table, some of the values are null; I only want non-null values to display.
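A minimal sketch, assuming the field is named useragent; either line drops events where the field is missing before the table is built:

... | search useragent=* | table useragent
... | where isnotnull(useragent) | table useragent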
Use automation capabilities to determine whether purchase abandonment occurs more with specific users, devices, or geographies

Video Length: 2 min 18 seconds

When it comes to e-commerce cart abandonment, it's important to be able to quickly find patterns in whether those abandoning share common characteristics, such as device type or geography, that could inform us of technical issues affecting the digital experience of a subset of customers. The Cisco AppDynamics Experience Journey Map makes this task easy with a Sankey visualization showing at a glance which site journeys have the most traffic.

Video Transcript

00:00:00 - 00:00:38
When it comes to e-commerce cart abandonment, it's important to be able to quickly find patterns in whether those abandoning share any common characteristics, such as device type or geography, that could inform us whether there are technical issues affecting the digital experience for a subset of customers. The Experience Journey Map in AppDynamics makes this task easy. The Experience Journey Map uses a Sankey diagram to visually indicate at a glance which journeys through the site have the most traffic.

00:00:39 - 00:01:01
Each step in a journey denotes the drop-off rate at that step, so we can understand immediately where most abandonment is occurring in each journey. Having identified that most abandonment is occurring during the checkout step, we click through the drop-off rate and are automatically shown additional context to help easily and quickly determine whether abandonment is disproportionately occurring for any specific set of users, device types, geographies, and so on.

00:01:01 - 00:01:22
The session data for abandoning users is automatically aggregated and sorted in descending order, readily surfacing any patterns. We can easily see that users with iPhone 12 devices abandoned nearly twice as often compared to users visiting with any other device model. Further, we can also see that a higher proportion of customers in the U.S. are abandoning compared to other geographies.

00:01:23 - 00:01:49
We can review more detailed per-customer insights by selecting the corresponding radio buttons to define the specific set of users we are most interested in and clicking Analyze: AppDynamics loads the full session data that was captured for iPhone 12 users located in the U.S. We can use AppDynamics advanced analytics capabilities to further refine the set of abandoning customers, for instance by selecting a specific region within the U.S., to deeply understand how the customers of interest were interacting with the mobile app at each step in their journey.

00:01:50 - 00:02:18
This user experience data can also be correlated with backend performance data for an end-to-end view of the user experience, from browser or mobile device through the network and into the backend services responsible for fulfilling the request. This allows IT teams to quickly rule out performance issues as a factor in abandonment.

Additional Resources

Learn more about the Experience Journey Map in the documentation:
Experience Journey Map Overview
Analyze Traffic Segments

About the presenter

Adam Smye-Rumsby, Cisco AppDynamics Senior Sales Engineer
Adam J. Smye-Rumsby joined AppDynamics as a Senior Sales Engineer in 2018, after nearly 16 years with IBM across a variety of roles, including 5+ years as a Senior Sales Engineer in the Digital Experience and Collaboration business unit. Since then, he has helped dozens of enterprise and commercial customers improve the maturity of their application monitoring practices. More recently, Adam has taken on the challenge of developing subject-matter expertise in the application security market. He has contributed to two published books on the use of Java technology, and holds patents in AI/ML, Collab, VR, and other technology areas. Reach out to Adam to learn more about how AppDynamics is helping Cisco customers secure their applications in an ever-changing threat landscape.
Hi, I am seeing an aggregation issue for one of my Cisco sourcetypes. How can I fix this in my Splunk Cloud environment?

12-06-2023 17:42:27.004 +0000 ERROR AggregatorMiningProcessor [82698 merging_0] - Uncaught exception in Aggregator, skipping an event: Can't open DateParser XML configuration file "/opt/splunk/etc/peer-apps/Splunk_TA_cisco-ise/default/datetime_udp.xml": No such file or directory - data_source="/syslog/nac/ise.log", data_host="ise-xx", data_sourcetype="cisco:ise:syslog"

Thanks...
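A hedged sketch of one possible fix, on the assumption that the TA's props.conf sets DATETIME_CONFIG to a datetime_udp.xml file that is missing from the deployed package: pointing the sourcetype back at Splunk's built-in datetime definitions avoids the missing file (in Splunk Cloud this change has to go through a local override app or Splunk support, since the setting applies at index time):

# props.conf (local override; verify the stanza name against the TA's default/props.conf)
[cisco:ise:syslog]
DATETIME_CONFIG = /etc/datetime.xml

Restoring the missing datetime_udp.xml by reinstalling or upgrading Splunk_TA_cisco-ise would address the root cause instead.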
Is there any mechanism to monitor a Salesforce URL beyond single sign-on? We tried to set this up using Splunk Website Monitoring, but that app only monitors the single sign-on page, not the actual page behind it. Please suggest a method to monitor a URL beyond single sign-on. Thanks.
For example: If "fieldX" has many possible values(ex. 1 2 3 4 a b c d ...) we want to have Splunk send an alert email whenever any of these values are seen more than 10 times in 60mins.   Does anyo... See more...
For example: If "fieldX" has many possible values(ex. 1 2 3 4 a b c d ...) we want to have Splunk send an alert email whenever any of these values are seen more than 10 times in 60mins.   Does anyone know a search that will work for this? Thanks in advance!
Do you need to return output from one section of a chained search to another, like returning a value from a function in a programming language? I've assumed that a chained search would, from a user's perspective, act like concatenating both searches, but with really DRY efficiency, which makes it a superb fit for dashboarding, since the material being presented often shares a common subject. There are certain queries I am running that break when used in a chained order; am I missing some kind of return function?
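For reference, a minimal Simple XML sketch of the base/chain pattern (names and searches here are illustrative). A common gotcha that matches the "breaks when chained" symptom: if the base search is not a transforming search, Splunk only passes a limited set of fields to the chained searches, so any chained search referencing a field the base didn't explicitly keep returns nothing. Ending the base with a transforming command (stats, chart, ...) or an explicit | fields list avoids it:

<search id="base">
  <query>index=web sourcetype=access_combined | stats count by status, host</query>
</search>
<chart>
  <search base="base">
    <query>| stats sum(count) as total by status</query>
  </search>
</chart>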
Hello all, Can someone help me with where I can download the Splunk Tools 6.3 package for linux?
So when an upstream error is logged in our Splunk, it has two fields that contain all the information about the error, so I created a nice little query to show a simple table of the two fields:

stats values(errorMessage) by errorCode

However, one of the error messages in the errorMessage field can contain an id for the current transaction with the server, so when we scale up and release, this table will contain hundreds of values for a single error type. Examples of the types of errors (sanitized, without actual data):

errorCode: Not Required, errorMessage: [Error: Not Required] 400: Downgrade for transactionId=00000000000: type=01 country=GB
errorCode: Not Required, errorMessage: [Error: Not Required] 400: Downgrade for transactionId=00000000001: type=01 country=GB
errorCode: Invalid Request Parameters, errorMessage: [Error: Invalid Request Parameters] 400: Value of 30 for field not valid
errorCode: undefined, errorMessage: [Error: undefined] 400: undefined
errorCode: undefined, errorMessage: [Error: undefined] 500: undefined

I would like values(errorMessage) to group the first two items as a single entry: if I could create a new variable without the transactionId, or with it replaced by a constant value, the information would be much easier to read and present for error triage in our dashboard, because the transaction id is not important for seeing an error trend. I'm not super great with regex, but I feel there is something that would work to just find a run of digits of a specific length and remove or replace it. Is that possible? Thanks
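A minimal sketch, assuming the id is always a run of digits after transactionId=; replace() normalizes the message before the stats, so the variants collapse into one value:

... | eval errorMessage=replace(errorMessage, "transactionId=\d+", "transactionId=<id>")
| stats values(errorMessage) by errorCode

If the ids are always exactly 11 digits, \d{11} is a tighter match; anchoring on the transactionId= label keeps other numbers in the message (like the 400/500 status codes) untouched.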
Hello, the rex command to catch and group the multivalue Accesses field is not working, even though the results in regex101 are fine. Could you tell me what I am missing?

Test Log:

12/12/2012 04:25:13 PM
LogName=Security
EventCode=5145
EventType=0
ComputerName=test.corp
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=2049592111
Keywords=Audit Success
TaskCategory=Detailed File Share
OpCode=Info
Message=A network share object was checked to see whether client can be granted desired access.
Subject:
  Security ID: User\Test
  Account Name: Test
  Account Domain: Test
  Logon ID: 0x117974CE
Network Information:
  Object Type: File
  Source Address: ::1
  Source Port: 51234
Share Information:
  Share Name: \\*\C$
  Share Path: \??\C:\
  Relative Target Name: Users\Test\Desktop
Access Request Information:
  Access Mask: 0x100081
  Accesses: SYNCHRONIZE
    ReadData (or ListDirectory)
    ReadAttributes
Access Check Results:
  -

Splunk Rex Query:

... | rex field=Body ".*Access Mask.*\sAccesses:\s(?<Accesses2>.+?)Access\sCheck Results\:.*"

Thanks, Regards,
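A hedged guess at what's missing: regex101 defaults can differ from rex, and in PCRE "." does not match newlines unless the s (DOTALL) flag is set, so a pattern spanning the line breaks between Accesses: and Access Check Results: fails at search time even though it matched in the tester. An untested variant, assuming the raw text really is in a field named Body:

... | rex field=Body "(?s)Access\sMask.*?Accesses:\s+(?<Accesses2>.+?)\s+Access\sCheck\sResults:"

If each access right should then become its own value, | makemv tokenizer="([^\r\n]+)" Accesses2 splits the captured block on line breaks into a true multivalue field.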