Hello, I am working on a Splunk query and I need help adjusting my rex command to split a single field into two separate fields. Example below:

index=test sourcetype=test category=test
| rex field=user "(?<region>[^\/]+)\/(?<username>[^\w].+)"
| fillnull t
| sort _time
| table _time, username, user, region, sourcetype, result, t
| bin span=1d _time
| dedup t

The user field contains: test\test1 and I need to split it so that username=test and region=test1.
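A minimal sketch of one way to split on the backslash using eval/split instead of rex (the field names come from the question; the exact escaping may need adjusting for your data, since the sample value uses a backslash rather than the forward slash the rex above expects):

index=test sourcetype=test category=test
| eval parts=split(user, "\\")
| eval username=mvindex(parts, 0), region=mvindex(parts, 1)
| table _time, user, username, region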
I want to show statistics of daily volume and the latest events for all sourcetypes in a single table. Can you please help?
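A minimal sketch of one possible approach using tstats, assuming "volume" means daily event count and that you have access to all indexes (adjust the WHERE clause as needed):

| tstats count AS events latest(_time) AS latest_event WHERE index=* BY _time span=1d, sourcetype
| stats avg(events) AS avg_daily_events max(latest_event) AS latest_event BY sourcetype
| eval latest_event=strftime(latest_event, "%Y-%m-%d %H:%M:%S")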
Does anyone know if the current Dynatrace add-on will be updated to use the Dynatrace V2 API? We have a requirement to ingest some web app metrics from Dynatrace that are not easily available via the V1 API, and we would also like to know whether the add-on will remain functional if/when the V1 API is made redundant.
Hi Community,

I have two separate Splunk installs: one is version 8.1.0 and the other is 8.2.5. The older version is our production Splunk install. I can see a lag in the dashboard set-up which calculates the difference between the index time and the actual event time. Since it's a production environment, I assumed that the lag might be due to the reasons below:

1. The universal forwarder is busy because it is doing a recursive search through all the files within the folders. This is done for almost 44 such folders. Example: [monitor:///net/mx41779vm/data/apps/Kernel_2.../*.log]
2. The forwarder might be too outdated to handle such loads. The version used is 6.3.3.
3. The Splunk install is busy waiting because there is already a lot of incoming data from other forwarders.

To clarify the issue, I replicated the setup in another environment. This is a test environment which does not have the heavy load of production but has the same settings with reduced memory. When I set up a completely new forwarder and replicate the setup in the test environment, I still see the same lag. It is very confusing why this is happening. Could someone provide me with tips or guidance on how to work through this issue? Thanks in advance.

Regards,
Pravin
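A minimal sketch of a search to quantify the indexing lag per host, assuming the dashboard measures the gap between _indextime and _time (the index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) AS avg_lag max(lag_seconds) AS max_lag BY host
| sort - max_lag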
I want to create an alert that fires when events with the same source IP address, the same destination address, and different destination ports occur at least 500 times in 1 minute. The search I've come up with so far is as follows, although I'm not sure it's what I really need:

index=net-fw (src_ip=172.16.0.0/12 OR src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16) AND (dest_ip=172.16.0.0/12 OR dest_ip=10.0.0.0/8 OR dest_ip=192.168.0.0/16) action IN (allowed blocked)
| stats first(_time) as date dc(dest_port) as num_dest_port by src_ip, dest_ip
| where num_dest_port > 500
| convert ctime(date) as fecha

I think what I am missing is "with the same source IP and the same destination IP in one minute". Could someone help me with this problem? Thanks in advance and best regards.
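A minimal sketch of one way to add the one-minute constraint by bucketing _time before the stats (thresholds and field names follow the question):

index=net-fw (src_ip=172.16.0.0/12 OR src_ip=10.0.0.0/8 OR src_ip=192.168.0.0/16) (dest_ip=172.16.0.0/12 OR dest_ip=10.0.0.0/8 OR dest_ip=192.168.0.0/16) action IN (allowed, blocked)
| bin _time span=1m
| stats dc(dest_port) AS num_dest_port BY _time, src_ip, dest_ip
| where num_dest_port >= 500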
Hello, I am developing a command to replay an alert. I'm not sure if it's a good idea to use the same method as the one used in the previous version of the program. For the replay, after determining which rules need to fire, I use:

kwargs_block = {'dispatch.earliest_time': earliest, "dispatch.latest_time": latest, "trigger_actions": self.trigger}
job = search.dispatch(**kwargs_block)

Here is an example of a replay started at 11:52, but its scheduled task normally runs at 30 minutes past each hour, so I would like the result to show 11:30. Do you have any idea how to set the indexing date of the alert?
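A minimal sketch, assuming the splunklib saved-search dispatch pattern already used above and that the `dispatch.now` dispatch argument (which asks Splunk to treat a given time as "now" for the run) is appropriate here; the saved-search name, credentials, and times are placeholders:

from splunklib import client

service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Hypothetical saved search name for the alert being replayed.
saved = service.saved_searches["my_replayed_alert"]

# Dispatch over the window that ended at 11:30 and treat 11:30 as "now".
job = saved.dispatch(**{
    "dispatch.earliest_time": "2022-05-20T11:00:00",
    "dispatch.latest_time": "2022-05-20T11:30:00",
    "dispatch.now": "2022-05-20T11:30:00",
    "trigger_actions": 1,
})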
The objective is to display the modifications done by a submitter, showing the number of modifications, the respective filenames, and the hash values. Example: Submitter John did 15 modifications: 3 modifications to file app.exe, 2 modifications to gap.exe, and 10 modifications to rap.exe. So the display should show 15 hashes, and my SPL does the job. The SPL ends with:

| stats values(risk_country) AS extreme_risk_country, list(flagged_threat) AS flagged_threat, list(times_submitted) AS times_submitted, list(md5_count) AS unique_md5, list(meaningful_name) AS file_name, list(md5_value) as md5 by submitter_id

I do see the results, but I am unable to easily eyeball where the hashes of one filename end and the next one's begin, especially when there are lots of hashes. Please check the attachment of the output I am getting. I want to easily see/distinguish where one set of hashes finishes for a file and the next one starts. I am looking for suggestions to make the sets visually separate. Thank you.
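One possible way to get a visual break per file, sketched below, is to move the file name into the by clause so each submitter/file pair gets its own row (the aggregate functions are kept from the question); whether this still matches the intended one-row-per-submitter layout is a judgment call:

| stats values(risk_country) AS extreme_risk_country, list(flagged_threat) AS flagged_threat, list(times_submitted) AS times_submitted, list(md5_count) AS unique_md5, list(md5_value) AS md5 BY submitter_id, meaningful_name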
Hello Splunkers, I have a question regarding the number of indexers or indexer clusters that can reside in a single site cluster. Suppose I have 400 indexers: is there a limit on the number of indexers in a single site? And another question: how many indexers can I place in an indexer cluster? Can it be more than 3?
Sumologic query:

_source="VerizonCDN"
| json field=_raw "path"
| json field=_raw "client_ip"
| json field=_raw "referer"
| where %referer = ""
| where %status_code = 200
| json field=_raw "user_agent"
| count by %host,%path,%client_ip,%referer,%user_agent
| where _count >= 100
| order by _count desc

and my conversion to Splunk:

source="http:Emerson_P1CDN" AND status_code=200 AND referer=""
| stats count by host,path,client_ip,referer,user_agent
| where count >= 100
| sort - count

Do you think I converted it right? The results in Splunk were different from Sumologic.
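A minimal sketch of one possible adjustment, assuming the Splunk events are raw JSON and the fields are not extracted automatically at search time (the Sumologic query extracts them explicitly with the json operator); spath extracts the JSON fields before the filters and stats run, and the empty-referer condition is matched explicitly:

source="http:Emerson_P1CDN"
| spath
| search status_code=200 referer=""
| stats count by host, path, client_ip, referer, user_agent
| where count >= 100
| sort - count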
Is there a way to configure an external repository as the default one? I noticed that when I create a new playbook or modify an existing playbook from another remote repository, it always gets saved into the local repository. How do I change that behaviour to make another repository the default? I am on SOAR on-prem 5.1.0.
I need to get a count of events by day and by hour or half-hour, using a field in the Splunk log which is a string whose value is a date, e.g. eventPublishTime: 2022-05-05T02:20:40.994Z. I tried some variations of the query below, but it doesn't work. How should I formulate my query?

index=our-applications env=prod
| eval publishTime=strptime(eventPublishTime, "%Y-%m-%dT%H:%M:%SZ")
| convert timeformat="%H:%M" ctime(publishTime) AS PublishHrMin
| convert timeformat="%Y-%m-%d" ctime(_time) AS ReceiptDate
| stats c(ReceiptDate) AS ReceiptDateCount by ReceiptDate, parentEventName,, PublishHrMin

Thank you
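A minimal sketch of one approach, assuming the main issue is that the strptime format needs to account for the milliseconds in the value (e.g. .994Z, which "%Y-%m-%dT%H:%M:%SZ" does not match), and that bucketing the parsed time with bin gives the half-hour grouping; the subsecond token (%3N here) may need adjusting for your data:

index=our-applications env=prod
| eval publishTime=strptime(eventPublishTime, "%Y-%m-%dT%H:%M:%S.%3NZ")
| bin publishTime span=30m
| eval PublishBucket=strftime(publishTime, "%Y-%m-%d %H:%M")
| stats count AS eventCount BY PublishBucket, parentEventName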
Hi, I have a custom Python script in Splunk that translates Chinese characters to English. The custom search command was built following the guide below: https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/ However, when we perform a search, the number of events does not tally with the Statistics tab. For example, there are a total of 8 events but only 1 row in Statistics. Sometimes it tallies, but most of the time it doesn't. I would like to know if this is a limitation within Splunk when using custom scripts, or if there is some configuration missing. Appreciate the help.
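Not an answer to the root cause, but a minimal sketch of a streaming custom command skeleton (per the splunklib.searchcommands pattern in the guide linked above) in which every incoming record is yielded back, so the Statistics row count should match the event count; the translate_text helper and the "translated" field are hypothetical stand-ins:

#!/usr/bin/env python
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option

def translate_text(text):
    # Hypothetical stand-in for the real Chinese-to-English translation logic.
    return text

@Configuration()
class TranslateCommand(StreamingCommand):
    # Field to translate; defaults to _raw.
    fieldname = Option(require=False, default="_raw")

    def stream(self, records):
        for record in records:
            value = record.get(self.fieldname, "")
            record["translated"] = translate_text(value)
            # Yield every record so no events are dropped from the results.
            yield record

if __name__ == "__main__":
    dispatch(TranslateCommand, sys.argv, sys.stdin, sys.stdout, __name__)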
The below setup doesn't appear to index the script's output and I can't figure out why. Even the basic one-liner example in their documentation (https://docs.splunk.com/Documentation/Splunk/latest/Data/MonitorWindowsdatawithPowerShellscripts) doesn't produce indexed events for me. I've tried several variations on how the data is being formatted. I know the script executes because the file change it makes is occurring.

configureBINDIP.ps1

$launchConfFile = "C:\Program Files\SplunkUniversalForwarder\etc\splunk-launch.conf"
$launchConfSetting = "SPLUNK_BINDIP=127.0.0.1"

function CraftEvent ($message) {
    $event = [PSCustomObject]@{
        "SplunkIndex" = "windows"
        "SplunkSource" = "powershell"
        "SplunkSourceType" = "Powershell:ConfigureBINDIP"
        "SplunkHost" = "mysplunkhost"
        "SplunkTime" = (New-TimeSpan -Start $(Get-Date -Date "01/01/1970") -End $(Get-Date)).TotalSeconds
        "Message" = $message
    }
    Return $event
}

if (-not (Test-Path $launchConfFile) ) {
    $event = [PSCustomObject]@{
        "Message" = "Could not locate splunk-launch.conf: $launchConfFile"
    }
    Write-Output $event | Select-Object
    exit
}

if ( (Get-Content $launchConfFile ) -notcontains $launchConfSetting ) {
    $message = "Appending '$launchConfSetting' to '$launchConfFile'"
    "`r`n$launchConfSetting" | Out-File $launchConfFile -Append utf8
    if ( (Get-Content $launchConfFile ) -contains $launchConfSetting ) {
        $message += ".... splunk-launch.conf update successful. Please remove this host from the app to restart."
    } else {
        $message += ".... splunk-launch.conf does not appear updated. Please continue to monitor."
    }
} else {
    $message = "splunk-launch.conf already appears updated. Please remove this host from the app to restart."
}

$event = [PSCustomObject]@{
    "Message" = $message
}
Write-Output $event | Select-Object

inputs.conf

[powershell://ConfigureBINDIP]
script = . "$SplunkHome\etc\apps\configure_bindip\bin\configureBINDIP.ps1"
index = windows
source = powershell
sourcetype = Powershell:ConfigureBINDIP

web.conf

[settings]
mgmtHostPort = 127.0.0.1:8089
Our IIS logs contain a "time_taken" field which indicates the number of milliseconds each event took. I'd like to use the data from this field, along with the actual event _time (what I'm thinking of as the time the server responded, or the "responseTime"), to create a chart showing how many events were "in progress" over time. It's easy enough to calculate the "requestTime" by doing something like this:

eval requestTime = _time - time_taken

What I'm missing is how to generate a graph with time (to the second) along the X-axis and the total number of events in progress at that time on the Y-axis. For example, if a request was logged at 12:00:06 pm and had a time_taken of 3,000 ms (thus the "requestTime" was 12:00:03), then I would want it to be counted in 4 columns: 12:00:03, 12:00:04, 12:00:05, and 12:00:06, indicating that this request was "in progress" during each of those times. Essentially, I want something like the output of the command below, but counting all events in progress during each of those seconds rather than just a discrete count of events based on their _time:

timechart count span=1s
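A minimal sketch using the concurrency command, which counts overlapping spans; it assumes time_taken is in milliseconds (so it is converted to seconds first), the index/sourcetype are placeholders, and because concurrency is only sampled at each request's start time this approximates the per-second in-progress count rather than filling every second:

index=iis sourcetype=iis
| eval duration = time_taken / 1000
| eval requestTime = _time - duration
| sort 0 requestTime
| concurrency duration=duration start=requestTime
| eval _time = requestTime
| timechart span=1s max(concurrency) AS in_progress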
We are on the Splunk Free license, which has a daily indexing limit of 500 MB. This has never before been a problem because we've had a pretty consistent log volume of roughly 2 MB/day. The total size of ALL of our logs, 150 MB, is far less than the daily limit. Yet somehow Splunk has complained and shut down our license. Does anyone have familiarity with this kind of error? Why would it trigger on such a small log database and low flow rate?
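A minimal sketch of a search against Splunk's internal license logs that may help show what is actually being counted against the daily quota (it requires access to the _internal index; the b field is bytes indexed, and adding BY st or idx breaks the usage down by sourcetype or index):

index=_internal source=*license_usage.log type=Usage
| eval MB = b / 1024 / 1024
| timechart span=1d sum(MB) AS indexed_MB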
Use this technical support tool to collect information for troubleshooting

Support Report is a technical support tool AppDynamics customers and Support Engineers can use to collect and archive the information needed to correctly identify technical issues. This article explains the tool, its configuration, and its use.

In this article...
What is Support Report?
How do I configure and run Support Report?
What happens once a report is generated?
Additional Resources

What is Support Report?

Support Report is a technical support tool that facilitates the process of collecting and archiving data, for AppDynamics customers and Support Engineers alike. It gathers general operating system information vital for correctly identifying issues, including hardware-specific information, logs, and configuration, both from the AppDynamics on-prem components (such as Controller, Enterprise Console, EUM, Events Service) and from across the operating system.

Prerequisites

Support Report works on most Linux flavors. It needs only the Bash shell, which is available on every Linux distribution, and it is designed not to crash even when a fundamental tool is unavailable. The script can be run by either a regular user or by root, but it will only be able to collect all information and logs when run as root. That said, it will not crash or give up when running as a regular user. All of the above keeps dependencies and requirements very low, allowing it to work accurately on any machine.

How it works

The tool is a Bash script that gathers information about the system from generally available places and basic system tools. If a particular tool is not present in the customer's environment, the script keeps working and simply reports that the tool is not there. The script detects the Linux flavor and adjusts all needed paths and behaviors accordingly. If, as strongly recommended, the customer provides a password for the MySQL database, the tool will connect to it and gather information from the database as well. The tool does not collect any customer metrics or other sensitive information from the database. Since AppDynamics applications can be installed in any directory on a server, and support_report is not an official part of the AppDynamics package, the tool needs to detect where the files of interest are actually located. This is easy when the Controller process is running, but the tool is built for the harder scenario of no running AppDynamics processes: it looks for the correct path by "brute force", by finding specific files on the server.

How do I run the Support Report?

The Support Report tool can be attached to the Zendesk support ticket by an AppDynamics Engineer, or it can be downloaded from this article (see below). After it is uploaded (by SCP, or in any other way convenient for the customer) to a server where troubleshooting needs to be performed, the tool can be run from any directory. Use the command-line options to pick the specific information you'd like to collect, and to disable the information you DON'T want to share. Below is a sample run with the '-help' option, where all available parameters are described.
$ ./support-report.sh -help
Usage: support-report.sh [ -CEUScpHlazeoxv ] [ -d days of logs ] [ -o dir ]
  -C  Collect information about Controller
  -E  Collect information about Enterprise Console
  -U  Collect information about EUM server
  -S  Collect information about Events Service
  -c  Disable generating system configuration
  -p  Enable measuring system load/performance (720 samples of 5s, 1h in total)
  -H  Disable generating hardware report
  -l  Disable gathering system logs
  -a  Disable gathering AppD logs
  -d  Number of days back of logs to retrieve (default is 3 days)
  -z  Do not zip report and leave it in /tmp
  -e  Encrypt output report archive with password
  -o  Set the support-report output path
  -x  Keep the support-report logs in /tmp for debugging
  -v  Version

What happens once a report is generated?

Once a report is generated, the entire report archive is stored on the customer's server, in a location related to the AppDynamics component. The customer can easily review the report's information before sending it to technical support. Example output from the tool:

root@appd-ha1:~# ./support-report.sh
Determining system environment and configuration...
Provide controller MySQL root user password:
Provide Controller root user password (hit enter to skip):
Generating report...
Building system configuration
Building package list
Checking hypervisor
Getting EC2 instance info
Copying hardware profile
Memory information
Storage information
Copying system logs..Done!
Getting systemd info
Networking information
Init info
Checking time config
Checking AppD environment
Numa stats
Fetching install user environment
Get processes. Done!
Collecting TOP output
Creating Appdynamics files list
Getting selinux config
Controller logs
Collecting rotating logs from 3 days
Mysql Controller logs
Controller configs
Controller Keystore content
Collecting Controller SQL queries
Controller related information
Controller report
HA and DB replication status
Creating report archive... Done
The support-report has been saved to:
/appdynamics/platform/product/controller/logs/support-report/support-report_controller_appd-ha1.conserit.pl_2022-05-20_00-15-22.tar.gz

You will be directed where to submit this report by your technical support contact. The tool's report output can also be attached to a Zendesk support ticket proactively by the customer. This greatly speeds up the troubleshooting process, as all the information needed to help is very likely already present in the initial Zendesk message!

Additional Resources
How do I submit a Support ticket?
An FAQ
A guide to AppDynamics Help resources
Hi everyone, I'm trying to complete the lab exercises for one of the trainings. I was probably about 2/3 done when my lab environment froze. I tried to refresh and it says the server is taking too long to respond. I also tried relaunching the lab session, with the same result. I emailed "live support" at elearn@splunk.com with my issue, but it's been 45 minutes and I haven't even received an acknowledgement of my inquiry. Any suggestions? Does anyone know how long I should wait? I will also note that when I tried asking for a grade for the work I completed, it couldn't grade anything. I think my work is just gone, but the instructions say "If you are experiencing access issues DO NOT shutdown your servers. Leave them running and contact elearn@splunk.com for assistance." and my countdown to decommissioning is ticking.
Today I noticed that one of the heavy forwarders in our distributed environment was not calling back to the deployment server or fetching config. Checking the logs on the HF I noticed:

DC:DeploymentClient [3909 MainThread] - target-broker clause is missing.
DC:DeploymentClient [3909 MainThread] - DeploymentClient explicitly disabled through config.
DS_DC_Common [3909 MainThread] - Deployment Client not initialized.
DS_DC_Common [3909 MainThread] - Loading and initializing Deployment Server...
DeploymentServer [3909 MainThread] - Attempting to reload entire DS; reason='init'
DSManager [3909 MainThread] - No serverclasses configured.
DSManager [3909 MainThread] - Loaded count=0 configured SCs

I tried "splunk display deploy-client", which tells me that the "Deployment Client is disabled." I am pretty sure this is why the HF is not phoning home or fetching new config, though I cannot figure out why. The "deploymentclient.conf" file is identical for all our HFs, stored in /etc/apps/xxx/default/deploymentclient.conf. A grep search for "target-broker" revealed no duplicate/hidden/conflicting files generated locally. Traffic is allowed, as I am able to telnet to DS:8089. I have tried restarting Splunk on the HF with no success; I see the same "DC:DeploymentClient" messages. Why is this only affecting the one HF and not the others? How can I resolve this issue?

Best regards // G
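A minimal sketch of two checks that might help narrow this down, assuming a standard $SPLUNK_HOME layout: btool shows the effective deploymentclient.conf settings and which file each one comes from, and `splunk set deploy-poll` re-points the client at the deployment server (it typically writes to etc/system/local, which has higher precedence than the app's default directory):

# Show the effective deployment client config and the source file of each setting
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

# Re-point the client at the deployment server, then restart
$SPLUNK_HOME/bin/splunk set deploy-poll <deployment_server>:8089
$SPLUNK_HOME/bin/splunk restart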
Hello, is it possible to forward the same data to different Splunk platforms / indexer clusters without doubling license usage? Thanks
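For the forwarding part, a minimal outputs.conf sketch that clones data from a forwarder to two target groups; the group names and indexer hostnames are placeholders. Note that licensing is a separate question, since each environment that indexes the data will typically count it against its own license:

[tcpout]
defaultGroup = primary_indexers, secondary_indexers

[tcpout:primary_indexers]
server = idx-a1.example.com:9997, idx-a2.example.com:9997

[tcpout:secondary_indexers]
server = idx-b1.example.com:9997, idx-b2.example.com:9997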
We are excited to announce the preview of Splunk ITSI custom threshold windows (CTW). ITSI CTW allows you to adjust your severity levels when expected abnormal behavior may arise, e.g. public holidays, the peak day of the year or month, or a large retail moment like Black Friday.

Need access to the ITSI CTW preview? Complete this brief application; we will contact you if there is space and availability to participate!

Already have access to the preview?
Want to access product docs? ITSI custom threshold windows Docs offers detailed guidance on how to use the feature.
Want to request more features? Add your ideas and vote on other ideas at the ITSI custom threshold windows Ideas Portal.

Please reply to the thread below with any questions or to get support from the Splunk team. Our product and engineering teams are subscribed to this post and will be checking for feedback and questions!

Happy Testing,
— Alyssa Niles, Senior Manager, Early Product Adoption Team