I have a lookup file, bad_domain.csv, containing: baddomain.com, baddomain2.com, baddomain3.com. I want to search the proxy logs for people who connect to the bad domains in my lookup list, but also include subdomains. Example: subdo1.baddomain.com, subdo2.baddomain.com, subdo1.baddomain2.com. Please help: how do I create that condition in an SPL query?
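One possible approach (a minimal sketch only; it assumes the lookup has a single column named domain and that the proxy events carry the destination in a field named dest_host, so adjust both to your data) is to build wildcard search terms from the lookup in a subsearch:

index=proxy
    [ | inputlookup bad_domain.csv
      | eval dest_host=mvappend(domain, "*.".domain)
      | mvexpand dest_host
      | fields dest_host
      | format ]
| table _time user src_ip dest_host

The subsearch returns a filter like ( dest_host="baddomain.com" ) OR ( dest_host="*.baddomain.com" ) OR ..., which matches both the bare domains and any of their subdomains. Leading wildcards can be slow on large datasets, so test the performance against your proxy index.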
Is it possible to take the Splunk Admin certification after the Splunk Power User certification has expired?
Has anyone ever faced or implemented this on Splunk ES? I'm facing an issue when trying to add a TAXII feed from an OTX API connection. I have already checked the connectivity and made some changes to the configuration, up to disabling the preferred captain on my search head, but it is still not resolved. I also know there is an app for this, but I just want to clarify whether this option is still supported or not. Here are my POST arguments:

URL: https://otx.alienvault.com/taxii/discovery
POST Argument: collection="user_otx" taxii_username="API key" taxii_password="foo"

But the download status stays on "TAXII feed polling starting", and when I check the PID information:

status="This modular input does not execute on search head cluster member" msg="will_execute"="false" config="SHC" msg="Deselected based on SHC primary selection algorithm" primary_host="None" use_alpha="None" exclude_primary="None"
Hi! Is it possible to integrate the app with multiple ServiceNow instances? If yes, how do you "choose" which one you want to create the incident in? For example, if I am using: | snowincidentstream OR | snowincidentalert
I have around 60 standalone Windows laptops that are not networked. I am looking to install a UF to capture the Windows logs and have them stored on the local drive "c:\logs". The logs will then be transferred to a USB drive for archiving and indexed into Splunk for NIST 800 compliance, e.g. login success/failure. I am struggling to find the correct syntax for the UF to save locally, as it asks for a host and port.

josh
There is no pattern or punctuation, so running a regex might not work in this situation, since I can't know what kind of error or pattern will appear in the final line/sentence of the field. The last sentence can be anything and is unpredictable, so I just wanted to see if there is a way to grab the last line of the log that is in the field. This example most likely won't help, but it paints a picture that I just want the last line.

index=example | search "House*" | table Message

The log looks similar to this:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example /local/line499
D://example ......a bunch of sensitive information
D://example /crab/lin650
D://example ......a bunch of sensitive information
D://user/local/line500

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : someone stepped on the wire.

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://user/local/line980 ,indo

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : Simon said Look

Goal:
D://user/local/line500
Error : someone stepped on the wire.
D://user/local/line980 ,indo
Error : Simon said Look

I hope this makes sense....
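If the whole log ends up in one field (Message in the example above), one way to grab just its last line, regardless of what that line contains, is a rex that anchors on the end of the field (a sketch; the field name and the assumption that lines are separated by newlines come from the example, not from your real data):

index=example "House*"
| rex field=Message "(?<last_line>[^\r\n]+)\s*$"
| table last_line

The regex captures the final run of non-newline characters before the end of the field, so it returns the last non-empty line whether it is an Error message, a path, or anything else.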
The log entries below include different formats within them. I am not sure how to write props.conf for proper field extraction and line breaking. Each log entry has text, a delimiter (|), and JSON.

2024-03-11T20:58:12.605Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Create","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXX/XXXXX.smil/transmux/XXXXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":360,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXX","wflow":"System_default"}
2024-03-11T20:58:12.611Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Cache","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXXXX/XXXXXX.smil/transmux/XXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":0,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXXXXXXX","wflow":"System_default"}
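A starting point (a sketch only; the sourcetype name is a placeholder and the header layout is inferred from the two sample events, so validate it against your full data) is to break events on the leading ISO timestamp, extract the header fields with a search-time extraction, and capture the JSON payload into its own field so it can be parsed with spath:

# props.conf
[vendor:sessionmanager]
SHOULD_LINEMERGE = false
# each new event starts with an ISO-8601 timestamp at the beginning of a line
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 30
# the trailing literal Z indicates UTC
TZ = UTC
# search time: pull apart the text header and keep the JSON after the pipe in json_payload
EXTRACT-header = ^\S+\s+\[(?<log_level>[^\]]+)\]\s+(?<component>\S+)\s+sgrp:(?<sgrp>\S+)\s+swn:(?<swn>\S+)\s+sreq:(?<sreq>\S+)\s+\|\s+(?<json_payload>\{.*\})$

At search time the JSON fields can then be expanded with something like ... | spath input=json_payload.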
It is important to have visibility into how users interact with your brand through websites or mobile apps so you can find opportunities for increased user retention, brand loyalty, and business growth. Digital experience monitoring gives you the power to measure every touch point a user has with a website or mobile app.

Why is Digital Experience Monitoring (DEM) important, and why should you care?

Digital Experience Monitoring (DEM) is essential for understanding and optimizing how users interact with your digital platforms, and it is a key part of any modern observability strategy. By tracking real-time metrics, DEM provides insights into performance, usability, and overall user satisfaction, enabling you to identify and resolve issues in your apps faster.

Splunk Observability Digital Experience Monitoring component highlights

Performance Monitoring: Tracks load times, page and route delays, uptime, and error rates to ensure fast and reliable performance. Useful for catching errors, or understanding whether a particular browser or location is involved.

User Behavior Analytics: Analyzes user interactions to understand how users navigate and engage with your website or mobile web app. In this example, our integration automatically captures Core Web Vitals metrics. To learn more about Core Web Vitals, go here: https://developers.google.com/search/docs/appearance/core-web-vitals

Synthetic Monitoring: Simulates user interactions to identify potential issues before they affect real users. Splunk Observability offers a full suite of synthetic tests; you can import tests from your Chrome browser or write scripts to simulate user navigation. You can create detectors and alert on important KPIs related to your specific test.

Real User Monitoring (RUM): Collects data from actual user sessions to provide insights into real-world performance and user experience. The trace view allows you to explore client and server traces, and advanced filtering makes it easy to filter by any dimension you need.

Digital experience analytics: Allows customers to quantify user happiness and turn those user experience insights into tangible business outcomes.

Cisco's combined Digital Experience Monitoring portfolio provides comprehensive tools for monitoring and optimizing digital experiences. In this example we can see how the platform automatically matches traces from the browser with APM, providing easy navigation and access to end-to-end tracing information. It automatically captures every user interaction and correlates the data with session replay, giving you quick access to metrics, logs, traces, and session replay.

Does your company have a digital experience monitoring strategy? Reach out, leave a comment here, and we will try to guide you on how Cisco can help.

Learn more:
Explore Splunk workshops: https://splunk.github.io/observability-workshop/latest/en/index.html
Explore Splunk Digital Experience docs: https://docs.splunk.com/observability/en/rum/intro-to-rum.html
Request a free trial: https://www.splunk.com/en_us/products/observability.html
Hello, I need help with auto multi-select of input values. I have an Index input with values like data1, data2, data3. If I select data1, the sourcetype(s) related to data1 should be auto-selected; if I multi-select data1 and data2 in the Index input, the related sourcetypes should be auto-selected in the multi-select Sourcetype input.
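As a starting point (a sketch; index_tok is an assumed token name for the Index multiselect, configured via its prefix/suffix/delimiter settings so that it expands to something like index="data1" OR index="data2"), the Sourcetype input can at least be kept in sync by driving it from a populating search filtered by the selected indexes:

| tstats count where $index_tok$ by sourcetype
| fields sourcetype

This makes the sourcetype choices always reflect whatever indexes are selected; actually pre-selecting all of those values additionally needs token handling (for example a change handler that sets the Sourcetype token from the search results), which is beyond this sketch.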
I have an alert that can clear in the same minute that it originally fired. When the correlation search runs, both events are in it: the alert and the clearing alert. The correlation search creates notable events for each, but it uses the current time as the _time for the notable events rather than the _time from the original alerts. Since both alerts are converted into notable events during the same correlation search run, they get exactly the same timestamp. This means ITSI cannot definitively determine the correct order of the events, and it sometimes thinks the Normal/Clear event came BEFORE the original alert. This seems odd to me; I would have imagined that ITSI would use the original event time as the _time for the notable event, but it doesn't. Any ideas on how to address this?
.NET Core Application Workflow for Agent Business Transaction Detection

Contents
Who would use this workflow?
How to check and adjust?
Resolution

Who would use this workflow?

If you have a .NET Core application and the .NET agent is not detecting any Business Transactions even though the application is under load, you may need to validate the aspdotnet-core-instrumentation node property. The .NET agent, by default, assumes that the application is using the default routing mechanism for its .NET Core version. However, this is not always the case, and in some instances this can prevent the agent from registering Business Transactions.

How to check and adjust?

The simplest way to check is to review the AgentLog file at the locations below, depending on the underlying OS:
For Windows, the default location is %programdata%\appdynamics\DotNetAgent\Logs
For Linux, the default location is tmp/appd/dotnet
For Azure Site Extension, the default location is %home%\LogFiles\AppDynamics

In the AgentLog file there will be a startup log entry that lists the .NET Core version in use as well as the inspected object:

INFO [com.appdynamics.tm.AspDotNetCoreInterceptorFactory] AspNetCore version: 3.1.32.0 (used object type Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketConnection via Microsoft.AspNetCore.Http.Features.IFeatureCollection [Microsoft.AspNetCore.Http.Features.IFeatureCollection])

Next, the agent will write what the determined routing is:

INFO [com.appdynamics.tm.AspDotNetCoreInterceptorFactory] Determined ASP.NET Core routing being used is Mvc

Here the agent detected the routing as MVC, which was deprecated in .NET Core 3, so this is not a viable option. Refer to this AppDynamics Documentation.

Resolution

To resolve this issue, we need to add a node property called aspdotnet-core-instrumentation at the tier level and apply it to all nodes under the tier. Since our application is using 3.1, we have the following options:
ResourceInvoker
Endpoint
HTTP

The different values have their advantages and disadvantages listed here: AppDynamics Docs. If the application routing middleware is heavily customized, HTTP might be the only viable option to ensure the required Business Transactions/entry points are captured.
Hello. I have a data source that is "mostly" JSON formatted, except it uses single quotes instead of double quotes; therefore, Splunk is not honoring it if I set the sourcetype to json. If I run a query against it using this:

sourcetype="test" | rex field=_raw mode=sed "s/'/\"/g" | spath

it works fine, and all fields are extracted. How can I configure props and transforms to perform this change at index time, so that my users don't need the additional search commands and all the fields are extracted by default, short of manually extracting each field?

Example event, no nested fields:
{'date': '2024-02-10', 'time': '18:59:27', 'field1': 'foo', 'field2': 'bar'}
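One way to do this (a sketch; the stanza name test is taken from the question, and it assumes no field values legitimately contain single quotes, which would also be rewritten) is to apply the same sed substitution at parse time with SEDCMD in props.conf, then let automatic JSON extraction take over at search time. No transforms.conf entry is needed:

# props.conf on the parsing tier (indexers / heavy forwarders)
[test]
SEDCMD-single_to_double_quotes = s/'/"/g

# props.conf on the search heads
[test]
KV_MODE = json

SEDCMD rewrites _raw before it is written to the index, so once the data looks like valid JSON, KV_MODE = json extracts the fields automatically without any extra search commands.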
Did you know there is a huge selection of videos on the AppDynamics YouTube channel? That's right: you can learn about What's New with Cisco AppDynamics, see the latest how-tos, get inspired by customer success stories, and more! Let's break down some of the great things you can check out.

First, we have our latest innovative video showcased on the homepage, giving you a well-articulated intro to the latest and greatest.

Our featured video highlights Splunk Log Observer Connect for Cisco AppDynamics. This simple integration between Cisco AppDynamics and Splunk provides quick access to the most relevant logs for troubleshooting application performance issues, right from the seat of your favorite full-stack observability tool, Cisco AppDynamics! Application Performance Monitoring (APM) has never inspired me so much. Find out why by watching the video!

Previously, we showcased Smart Agent for Cisco AppDynamics. The modern agent management solution for the discerning enterprise ops team simplifies application instrumentation through intelligent agent automation and lifecycle management, saving you time, giving you access to new capabilities, and adding business context. Where else but through social can you get so inspired, so quickly?

But wait, there's more. Meet some of AppD's "brains" in our Observability in Action series, where our technical solutions engineers provide hands-on insight into how you can improve your day-to-day operations by getting the most out of full-stack observability.

The last thing I will mention here, but certainly not the least interesting (far from it), is being able to watch and listen to some of the best customer stories about how we at Cisco AppDynamics are helping them succeed. Our Helping our Customers Succeed playlist is a personal favorite of mine because the customers are why we do what we do, every day, with passion!

But don't take my word for it: go visit, get inspired, and share your thoughts in the comments!
I was wondering if there is a Splunk app or a feature available to add a search bar when filtering by Splunk app. Every time, you have to scroll for a while just looking for the correct Splunk app, even if it's just the Search app. Is there a way to add a search bar for the apps? We have one for other pages and options. I may be overlooking something.
Hello! I am trying to upgrade to the latest version of Splunk Enterprise, 9.3, on a RHEL 8 server, but I am getting this error message after accepting the license. Has anyone seen this error? I have checked the permissions, and they are all fine. Thanks!

Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 846, in exec_module
  File "<frozen importlib._bootstrap_external>", line 982, in get_code
  File "<frozen importlib._bootstrap_external>", line 1039, in get_data
PermissionError: [Errno 1] Operation not permitted: '/opt/splunk/lib/python3.9/encodings/__init__.py'
Summary

While I did not initially set out to benchmark filesystem performance on our Linux-based Splunk Enterprise indexers, we ended up doing so while striving to optimize the indexing tier's I/O performance. Based on previous Splunk .conf presentations, the idea was to switch from ext4 to XFS to maximise disk performance. However, after changing to XFS the I/O performance decreased rather than increased over time. The Splunk indexer workloads tested included around a million searches/day and ingestion of around 350GB of data per indexer per day. The ext4 filesystem consistently outperformed XFS in terms of the introspection measure "avg_total_ms" on multiple indexer clusters. What caused a more significant performance impact was maintaining 20% free disk space versus 10% free disk space.

Measuring Linux I/O performance

There are multiple ways to measure I/O in Linux; here are a few options I have used.

iostat
Refer to Digging Deep into Disk Diagnoses (.conf 2019) for an excellent discussion on iostat usage.
Pros:
Very flexible
Provides all required statistics accurately
Cons:
You may need to use per-second measurements so you do not "miss" the spike in latency which is affecting indexing
iostat is a great CLI utility, but you need to get the data into another tool to graph/compare

Linux kernel I/O statistics
As per the kernel documentation for I/O statistics fields, the /proc/diskstats file is used by iostat to measure the difference in the I/O counters. Assuming you have iostat running for a period of time, you can compare the counter values to the previously seen counter values. This is why the first iostat output covers the period since system boot unless the -y flag is used.

Splunk Add-on for Unix and Linux
Pros:
Easy to set up
Runs iostat as a shell script
Cons:
This add-on measures iostat data incorrectly because it doesn't keep iostat running. I have logged idea https://ideas.splunk.com/ideas/APPSID-I-573 about this issue. I have also advised developers/support (in detail) of the issue via a support case in 2023, but as of July 2024 I do not believe the issue is resolved.

Metricator application for Nmon
The Nmon utility appears to produce accurate I/O data. However, the measurements are often "different" from iostat. For example, the disk service time is the average service time to complete an I/O; it is similar to await or svctm in iostat but is a larger value in Nmon (it does correlate as expected).
Pros:
Metricator provides useful graphs of the I/O data
Accurate data
Cons:
Measurements are "different" to other utilities
May have "too much" data for some

Splunk's _introspection data
Splunk Enterprise records I/O data in the _introspection index by default, and this data correlated with the Nmon/iostat data as expected. At the time of writing I did not find documentation on the introspection I/O metrics. In Alerts for SplunkAdmins I have created the dashboard splunk_introspection_io_stats to display this data; there are also views in the Splunk Monitoring Console.

Measurement summary

Measurement tool of choice: Nmon and _introspection were used. The Splunk Add-on for Unix and Linux provided metrics that did not match the iostat or Nmon/_introspection data, so this add-on's results were not used.

Variation in I/O performance: Splunk user searches will change I/O performance; in particular, SmartStore downloads or I/O spikes changed disk service times.
You can use the report “SearchHeadLevel — SmartStore cache misses — combined” in Alerts for Splunk Admins for an example query, or the SmartStore stats dashboard.

Per-server variance: I/O performance also varied per server irrespective of tuning settings; for an unknown reason some servers simply had "slower" NVMe drives than others (with a similar I/O workload).

Choice of measurement statistic: There are many statistics for disk performance in the _introspection index; we have data.avg_service_ms (XFS performed better) and data.avg_total_ms (ext4 performed better). With the Nmon data, DGREADSERV/DGWRITESERV were lower on ext4, and this correlated with "data.avg_total_ms" from the _introspection index in Splunk. Furthermore, this seemed to correlate with the 'await' time reported in iostat.

Additional measurements: DGBACKLOG (backlog time in ms) from Nmon was lower on ext4; however, disk busy time was higher. ext4 also resulted in more write and disk write merge operations. The total service time for an I/O operation was consistently lower under ext4 vs XFS, hence the recommendation and choice of ext4 going forward.

Filesystem tuning & testing

/etc/fstab settings:
ext4 — noatime,nodiratime (also tested with defaults)
XFS — noatime,nodiratime,logbufs=8,logbsize=256k,largeio,inode64,swalloc,nobarrier (also tested with defaults)

Changing filesystems
To switch filesystems I re-formatted the partition with the required filesystem (a complete wipe) and let SmartStore downloads re-populate the cache over time. Metricator/Nmon along with Splunk's _introspection data was used to compare the performance of the filesystems on each server. Performance improved (initially) after the switch to XFS; however, it was later determined that the improvement related to the percentage of the partition/disk that was left free. There was a noticeable increase in response times after the partition dropped below 20% free space towards the 10% free set in Splunk's server.conf settings. Keeping 10% of the disk free is often recommended online for SSD drives; we increased our server.conf setting for minFreeSpace to 20% to maximise performance.

Server setup
All servers were located on-premise (bare metal), 68 indexers in total, with 4 NVMe drives per server (3.84TB read-intensive disks); Linux software RAID (mdraid) in RAID 0 was used. The total disk space was 14TB/indexer on a single filesystem for the SmartStore cache and indexes/DMA data.

Ext4 vs XFS graphs
The first graphs depict, for an equivalent read/write workload, the "average total ms" value, which I've named "average wait time" in the graphs. I've taken the total response time (sum) of the 4 disks on each server across multiple servers. I also tested alternative ways to measure this value, such as perc95 of response times across the 4 disks. ext4 appeared to be faster in all cases.

Average wait time for ext4/XFS (30 days)
Read/write KB for ext4/XFS (30 days)

The next graphs depict a similar read/write workload over a 24-hour timespan, showing reads/writes per second. ext4 does have more writes per second in some instances; however, XFS has longer wait times.

Average wait time and IOPS for ext4/XFS (24 hours)

What about OS version changes?
While I did not keep the graphs as evidence, the general trend was that a newer kernel version resulted in lower service times on ext4. CentOS 7 / kernel 3.10 generally had lower performance than servers running Red Hat 8.5 / kernel 4.6.x.
This in turn was slower than servers with Oracle 8 / kernel 5.4.x UEK. I did not have enough data to draw a firm conclusion, but there was a definite trend of lower disk-level latency on the servers with newer kernel versions.

Conclusion

The ext4 filesystem, for our Splunk indexer workload of over 1 million searches/day and around 350GB of data per indexer per day, was generally faster than XFS in terms of the avg_total_ms measurement. What made a greater difference in performance was leaving 20% of the disk space on the filesystem free; this applied to both ext4 and XFS. Finally, newer kernel versions also appear to improve I/O performance with ext4 (this comparison was not done with XFS). If you are running a Splunk indexer cluster, I would suggest testing ext4 if you are currently using XFS. Let me know what you find in the comments.

This article was originally posted on Medium: Splunk Indexers — ext4 vs XFS filesystem performance.
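For readers who want to run the same comparison on their own indexers, here is a minimal sketch of the kind of search used to chart the avg_total_ms measure from the _introspection data discussed above (the sourcetype, component, and field names reflect the default resource-usage introspection logging; verify them against your own _introspection index):

index=_introspection sourcetype=splunk_resource_usage component=IOStats
| timechart span=15m avg(data.avg_total_ms) AS avg_total_ms by host

Running this over comparable workload windows before and after a filesystem change gives a per-host view of the same "average wait time" trend shown in the graphs.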
Generative AI for SPL -- Faster Results

AIA will generate SPL from natural language, explain SPL, and answer how-to product questions. With AIA, more users in your organization can get the full value of Splunk insights, lessening the burden on your administrators. In this session, we discuss:
Splunk's differentiating approach to GenAI
Introduction to AI Assistant for SPL
Technical review of the AI Assistant
The LLM under the hood
AI Assistant for SPL demo
How to activate

Here is the full Tech Talk:
Hi, I have a Splunk dashboard created in Dashboard Studio. The dashboard has 3 tables, and all the values in these tables are either left- or right-aligned, but I want them to be center-aligned. I tried finding solutions, but all the solutions mentioned in other posts are for Classic dashboards, which are written in XML. How can we do this in a JSON-defined (Dashboard Studio) dashboard? Thanks, Viral
Hello All, I have a lookup file which stores data about hosts across multiple indexes. I have reports which fetch information about hosts from each index and update the records in the lookup file. Can I run parallel searches for the hosts related to each index and have them update the same lookup file in parallel? Or is there a risk to performance or to the consistency of the data? Thank you, Taruchit
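Concurrent writers are risky here: outputlookup rewrites the whole CSV, so two reports finishing at nearly the same time can overwrite each other's changes. One way to avoid that (a sketch; the index names, host field, and lookup name are placeholders for whatever your reports use) is to consolidate the per-index updates into a single scheduled search that merges new results with the existing lookup and writes it back once:

(index=index1 OR index=index2 OR index=index3)
| stats latest(_time) AS last_seen by index, host
| inputlookup append=true host_inventory.csv
| stats max(last_seen) AS last_seen by index, host
| outputlookup host_inventory.csv

With a single writer there is no race on the file, and the second stats keeps the most recent record per host and index whether it came from the new search results or from the existing lookup.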
Hi Splunkers, The idea is to pull any new file creations in a particular folder inside C:\users\<username>\appdata\local\somefolder. I wrote a batch script to pull and index this data, and it works, but the issue is that I cannot define a token for the username. For example, if in the script I specify the path as C:\users\<user1>\appdata\local, the batch script runs as expected and the data is indexed into Splunk, but if I replace user1 with %userprofile% or %localappdata%, the batch script does not run. How do I resolve this?
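If the goal is just to index new files appearing under each user's AppData folder, one alternative to the batch script (a sketch; the folder name, index, and sourcetype below are placeholders) is to let the Universal Forwarder monitor the path directly with a wildcard in place of the username. Environment variables such as %userprofile% expand in the context of the account running the process, so when the forwarder or a scheduled script runs as a service account they do not point at the individual user's profile, which is likely why the script behaves differently with the variable.

# inputs.conf on the Universal Forwarder
[monitor://C:\Users\*\AppData\Local\somefolder]
disabled = false
index = winfiles
sourcetype = appdata_files
# descend into subdirectories of the monitored folder
recursive = true

The * wildcard matches a single path segment, so this picks up the somefolder directory for every user profile on the laptop without hard-coding usernames.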