I created an API test with Synthetics, but I can't set up a detector that checks whether 2 consecutive requests (2 in a row) failed. Is there any way to configure the detector to raise an alarm when 2 requests in a row return errors?
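In Splunk Observability Cloud, "N consecutive failures" logic is usually expressed as a custom SignalFlow detector rather than a built-in static threshold. A rough sketch only — the metric name, dimension names, and the 2m window (for a test that runs every minute) are assumptions; check what your Synthetics test actually emits:

```
# SignalFlow sketch — names below are assumptions, not confirmed API
failed = data('synthetics.run.count',
              filter=filter('test_id', 'MY_TEST') and filter('success', 'false')).sum()
# Fire when the failure count stays above zero for two full run intervals,
# i.e. two consecutive runs failed for a 1-minute test schedule.
detect(when(failed > 0, '2m')).publish('API test failed twice in a row')
```

The idea is that `when(predicate, duration)` only trips once the condition has held for the whole duration, which maps "2 in a row" onto "failing for 2 run intervals".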
Hi Team, @ITWhisperer @gcusello
I am sending CSV data to Splunk, testing on a dev Windows machine with a UF. This is the sample CSV data:

Subscription Name, Resource Group Name, Key Vault Name, Secret Name, Expiration Date, Months
SUB-dully, core-auto, core-auto, core-auto-cert, 2022-07-28, -21
SUB-gully, core-auto, core-auto, core-auto-cert, 2022-07-28, -21
SUB-pally, core-auto, core-auto, core-auto-cert, 2022-09-01, -20

In the output I am getting, all rows land in a single event. I created inputs.conf and a sourcetype with my configurations. Can anyone help me understand why it is not breaking into separate events?
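For a header-plus-rows CSV, event breaking is usually handled on the Universal Forwarder with structured-data extraction. A minimal sketch, assuming the sourcetype is named azure_kv_csv (hypothetical name) and the file is comma-separated:

```
# props.conf — must be deployed on the UF itself, because
# INDEXED_EXTRACTIONS is applied at the forwarder for structured data
[azure_kv_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```

If the file is actually tab-delimited (as the pasted sample suggests), add FIELD_DELIMITER=\t. A common cause of "everything in one event" is putting these settings on the indexer instead of the UF, or a sourcetype name mismatch between inputs.conf and props.conf.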
What do I need to know about default ASP.NET Core hosting modules and how they affect AppDynamics APM .NET Agent configuration?

In this article...
Who would use this workflow?
What is a Hosting Module?
What are the defaults?
Implementation
InProcess hosting module
OutOfProcess hosting module

Who would use this workflow?
ASP.NET Core has two hosting models that change how the AppDynamics APM .NET Agent needs to be configured. The default hosting model differs across .NET Core versions. This discussion is largely centered on Windows.

What is a Hosting Module?
The ASP.NET Core Module (ANCM) is a native IIS module that plugs into the IIS pipeline, allowing ASP.NET Core applications to work with IIS. Run ASP.NET Core apps with IIS by either:
- Hosting an ASP.NET Core app inside of the IIS worker process (w3wp.exe), called the in-process hosting model.
- Forwarding web requests to a backend ASP.NET Core app running the Kestrel server, called the out-of-process hosting model.
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module

What are the defaults?
.NET Core 1.0 - 2.1 support only the OutOfProcess hosting model. .NET Core 2.2 introduced the InProcess hosting model:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module?view=aspnetcore-2.2#hosting-models-1
From .NET Core 3.0 onward, InProcess is the default for IIS-hosted apps. Please validate the default for your version by visiting https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module.
Note: It is not recommended to use versions of .NET Core that are not LTS. The above text is informational only, and it is strongly recommended to keep your version of .NET Core up to date with the latest LTS version.
https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core

Implementation with InProcess hosting
If your application is using the InProcess hosting model, you will configure the agent using the IIS section in config.xml:

<IIS>
  <applications>
    <application path="/" site="ASP NET Core">
      <tier name="My InProcess ASP.NET Core Site"/>
    </application>
  </applications>
</IIS>

For more information regarding IIS configuration for the agent, please visit: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/name-net-tiers

Implementation with OutOfProcess hosting
If your application is using the OutOfProcess hosting model, you will configure the agent using the standalone-application section in config.xml. With OutOfProcess there should be a dotnet.exe or yourapp.exe process running.

If yourapp.exe is present:

<standalone-application executable="yourapp.exe">
  <tier name="My OutOfProcess NET Core App" />
</standalone-application>

If dotnet.exe is present:

<standalone-application executable="dotnet.exe" command-line="yourapp.dll">
  <tier name="My OutOfProcess NET Core App" />
</standalone-application>

For more information regarding standalone-application configuration for the agent, please visit: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/configure-the-net-agent-for-windows-services-and-standalone-applications

The easiest way to confirm your hosting model is to open Task Manager and check whether any dotnet.exe or yourapp.exe processes are running on the machine. Send a couple of requests to the site first so that the application is actually running.
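Another quick check is the hostingModel attribute in the published site's web.config, which IIS reads to pick the model. A sketch with illustrative values (your handler path, module version, and DLL name will differ):

```xml
<!-- web.config of the published ASP.NET Core site (illustrative values) -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*"
           modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
    <!-- hostingModel is "InProcess" or "OutOfProcess";
         if absent, the ANCM default for your version applies -->
    <aspNetCore processPath="dotnet" arguments=".\yourapp.dll"
                hostingModel="InProcess" />
  </system.webServer>
</configuration>
```

If hostingModel reads InProcess, use the IIS section of config.xml above; otherwise use the standalone-application section.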
If you don’t have an application ready, we’ll use the included sample Tomcat application image in our task definition file.

In this article…
Sample Tomcat application
ECS Permissions
ADOTRole | ADOTTaskRole
Build your own image
Additional resources

Sample Tomcat application
In the following, you will need to edit all the sections marked “XXXXX”.

{
  "family": "aws-opensource-otel",
  "containerDefinitions": [
    ##### Application image
    {
      "name": "aws-otel-emitter",
      "image": "docker.io/abhimanyubajaj98/tomcat-app-buildx:latest",
      "cpu": 0,
      "portMappings": [
        {
          "name": "aws-otel-emitter",
          "containerPort": 8080,
          "hostPort": 8080,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [
        { "name": "APPDYNAMICS_AGENT_ACCOUNT_NAME", "value": "XXXXX" },
        { "name": "APPDYNAMICS_AGENT_TIER_NAME", "value": "abhi-tomcat-ecs" },
        { "name": "APPDYNAMICS_CONTROLLER_PORT", "value": "443" },
        { "name": "JAVA_TOOL_OPTIONS", "value": "-javaagent:/opt/appdynamics/javaagent.jar" },
        { "name": "APPDYNAMICS_AGENT_APPLICATION_NAME", "value": "abhi-ecs-fargate" },
        { "name": "APPDYNAMICS_CONTROLLER_HOST_NAME", "value": "XXXXX.saas.appdynamics.com" },
        { "name": "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX", "value": "abhi-tomcat-ecs" },
        { "name": "APPDYNAMICS_CONTROLLER_SSL_ENABLED", "value": "true" },
        { "name": "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY", "value": "XXXXX" },
        { "name": "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME", "value": "true" }
      ],
      "mountPoints": [],
      "volumesFrom": [ { "sourceContainer": "appdynamics-java-agent" } ],
      "dependsOn": [ { "containerName": "appdynamics-java-agent", "condition": "START" } ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "True",
          "awslogs-group": "/ecs/ecs-aws-otel-java-tomcat-app",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "healthCheck": {
        "command": [ "CMD-SHELL", "curl -f http://localhost:8080/sample || exit 1" ],
        "interval": 300,
        "timeout": 60,
        "retries": 10,
        "startPeriod": 300
      }
    },
    ##### Java Agent configuration
    {
      "name": "appdynamics-java-agent",
      "image": "docker.io/abhimanyubajaj98/java-agent-ecs",
      "cpu": 0,
      "portMappings": [],
      "essential": false,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/java-agent-ecs",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "taskRoleArn": "arn:aws:iam::778192218178:role/ADOTRole",
  "executionRoleArn": "arn:aws:iam::778192218178:role/ADOTTaskRole",
  "networkMode": "bridge",
  "requiresCompatibilities": [ "EC2" ],
  "cpu": "256",
  "memory": "512"
}

ECS Permissions
Your ECS task should have the appropriate permissions. For the example here, I created a task role ADOTRole and a task execution role ADOTTaskRole.

ADOTRole Permission Policy
The permission policy for ADOTRole looks as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:PutRetentionPolicy",
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries",
        "cloudwatch:PutMetricData",
        "ec2:DescribeVolumes",
        "ec2:DescribeTags",
        "ssm:GetParameters"
      ],
      "Resource": "*"
    }
  ]
}

ADOTTaskRole Permission Policy
The permission policy for ADOTTaskRole is identical in this example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:PutRetentionPolicy",
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries",
        "cloudwatch:PutMetricData",
        "ec2:DescribeVolumes",
        "ec2:DescribeTags",
        "ssm:GetParameters"
      ],
      "Resource": "*"
    }
  ]
}

Build your own image
Going back to the template, you can build your own image as well. The Dockerfile for the image can be found here, along with the task definition file: https://github.com/Abhimanyu9988/ecs-java-agent

Additional resources
To understand more about the AppDynamics Java Agent, see the following in the Documentation portal: https://docs.appdynamics.com/appd/22.x/22.12/en/application-monitoring/install-app-server-agents/jav...
Taking a Udemy Splunk introductory course module about macros. The string works fine in Search, but not as a macro named fileinfo — I get the above error.

index=web
| eval megabytes=bytes/1024/1024
| stats sum(megabytes) as "Megs" by file
| sort – Megs
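For reference, a search macro lives in macros.conf (or Settings > Advanced search > Search macros), with the stanza name matching the macro name. A sketch of what fileinfo might look like — note that the quoted search uses an en-dash ("–") after sort rather than a plain hyphen ("-"), which the SPL parser rejects, and that is a frequent cause of "works in Search, fails as macro" when the string is re-typed; the definition below assumes a plain hyphen:

```
# macros.conf (sketch — stanza name must equal the macro name)
[fileinfo]
definition = index=web | eval megabytes=bytes/1024/1024 | stats sum(megabytes) as "Megs" by file | sort - Megs
iseval = 0
```

The macro is then invoked in a search as `fileinfo` (with backticks).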
Attempting to address an issue where some of my org's larger playbooks refuse to load in the SOAR playbook editor. Support, as usual, disappoints by throwing their hands up in the air, referencing "Best Practices" and demanding we reduce the size of our playbooks. When I ask them to back their position with documentation, there is none. We're finding ourselves having to disable automations and workflows simply because we can't even load these workflows in the editor to perform routine fixes. Even after escalating to our account team, we're still getting the "reduce the size of your playbooks" answer. Their workaround for not being able to load the playbook in the current version is to rebuild a SOAR environment in 5.x so we can make these edits 🤬. Has anyone else experienced this? Is the only resolution rewriting playbooks to break them up? Version 6.1. Attempted the newest release in a lab; no improvement.
I already have the Salesforce add-on for Splunk. Does Salesforce have an email source that I can tap into to get those emails? Has anyone done this successfully?
Last month, the Splunk Threat Research Team had 3 releases of new security content via the Enterprise Security Content Update (ESCU) app (v4.26.0, v4.27.0, and v4.28.0). With these releases, there are 18 new analytics, 1 new analytic story, 31 updated analytics, and 2 updated analytic stories now available in Splunk Enterprise Security via the ESCU application update process.

Content highlights include:
- A new analytic story and detections for CVE-2024-27198 and CVE-2024-27199. This security content addresses critical authentication bypass vulnerabilities in JetBrains TeamCity. To learn more about these vulnerabilities and security content, check out our blog.
- Six new detections for remote monitoring and management (RMM) software abuse contributed by @nterl0k. Thank you!

New Analytics (18)
- Cloud Security Groups Modifications by User
- Detect Remote Access Software Usage File (External Contributor: @nterl0k)
- Detect Remote Access Software Usage FileInfo (External Contributor: @nterl0k)
- Detect Remote Access Software Usage Process (External Contributor: @nterl0k)
- Windows Multiple Account Passwords Changed
- Windows Multiple Accounts Deleted
- Windows Multiple Accounts Disabled
- Detect Remote Access Software Usage DNS (External Contributor: @nterl0k)
- Detect Remote Access Software Usage Traffic (External Contributor: @nterl0k)
- High Volume of Bytes Out to Url
- Detect Remote Access Software Usage URL (External Contributor: @nterl0k)
- JetBrains TeamCity Authentication Bypass CVE-2024-27198
- JetBrains TeamCity Authentication Bypass Suricata CVE-2024-27198
- JetBrains TeamCity Limited Auth Bypass Suricata CVE-2024-27199
- Nginx ConnectWise ScreenConnect Authentication Bypass
- Windows Credential Access From Browser Password Store
- Windows Known Abused DLL Created (External Contributor: @nterl0k)
- Splunk Authentication Token Exposure in Debug Log

New Analytic Stories (1)
- JetBrains TeamCity Vulnerabilities

Updated Analytics (31)
- AWS IAM Delete Policy
- O365 Multiple Users Failing To Authenticate From Ip
- ConnectWise ScreenConnect Authentication Bypass
- JetBrains TeamCity RCE Attempt
- Okta User Logins From Multiple Cities
- Path traversal SPL injection
- Splunk User Enumeration Attempt
- AWS Concurrent Sessions From Different Ips
- AWS Credential Access RDS Password reset
- Kubernetes Nginx Ingress LFI
- Kubernetes Nginx Ingress RFI
- Kubernetes Previously Unseen Process
- Detect AzureHound Command-Line Arguments
- Detect AzureHound File Modifications
- Detect SharpHound Command-Line Arguments
- Detect SharpHound File Modifications
- Detect SharpHound Usage
- Disabling Windows Local Security Authority Defences via Registry
- Linux Iptables Firewall Modification
- Linux Kworker Process In Writable Process Path
- Linux Stdout Redirection To Dev Null File
- Network Traffic to Active Directory Web Services Protocol
- System Information Discovery Detection
- Windows SOAPHound Binary Execution
- Splunk Command and Scripting Interpreter Risky Commands
- ASL AWS Concurrent Sessions From Different Ips
- Gsuite Outbound Email With Attachment To External Domain
- Detect Excessive Account Lockouts From Endpoint
- Detect Excessive User Account Lockouts
- Short Lived Windows Accounts
- Windows Create Local Account

Updated Analytic Stories (2)
- Cyclops Blink
- Sneaky Active Directory Persistence Tricks

The team also published the following 3 blogs:
- Security Insights: JetBrains TeamCity CVE-2024-27198 and CVE-2024-27199
- Under the Hood of SnakeKeylogger: Analyzing its Loader and its Tactics, Techniques, and Procedures
- Splunk Security Content for Threat Detection & Response: Q4 Roundup

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
Hi All, I have one log, ABC, that is present in the sl-sfdc API, and another log, EFG, that is present in the sl-gcdm API. I want to see the properties and error code fields that are present in the EFG log, but that sourcetype also has many other logs coming from different APIs. I only want the EFG log whose correlationId matches one in ABC; only then should it check the other log, and then I will use a regular expression (with spath) to get the fields. Currently I am using this query:

index=whcrm ( sourcetype=xl-sfdcapi ("Create / Update Consents for gcid" OR "Failure while Create / Update Consents for gcid" OR "Create / Update Consents done") ) OR ( sourcetype=sl-gcdm-api ("Error in sync-consent-dataFlow:") )
| rename properties.correlationId as correlationId
| rex field=_raw "correlationId: (?<correlationId>[^\s]+)"
| eval is_success=if(match(_raw, "Create / Update Consents done"), 1, 0)
| eval is_failed=if(match(_raw, "Failure while Create / Update Consents for gcid"), 1, 0)
| eval is_error=if(match(_raw, "Error in sync-consent-dataFlow:"), 1, 0)
| stats sum(is_success) as Success_Count, sum(is_failed) as Failed_Count,
| eval Total_Consents = Success_Count + Failed_Count
| table Total_Consents, Success_Count, Failed_Count

The first one is the ABC log and the second is the EFG. I also want to use this regular expression in between to get the details:

| rex field=message "(?<json_ext>\{[\w\W]*\})"
| spath input=json_ext

Or there can be any other way to write the query and get the counts — please help. Thanks in advance.
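One common pattern for "only keep EFG events whose correlationId also appears in ABC" is to flag, per correlationId, whether an ABC event exists, and filter on that flag. A sketch, not a drop-in fix (it reuses the sourcetypes and rex from the question):

```
index=whcrm (sourcetype=xl-sfdcapi ("Create / Update Consents for gcid" OR "Failure while Create / Update Consents for gcid" OR "Create / Update Consents done"))
    OR (sourcetype=sl-gcdm-api "Error in sync-consent-dataFlow:")
| rex field=_raw "correlationId: (?<correlationId>[^\s]+)"
| eventstats max(eval(if(sourcetype="xl-sfdcapi", 1, 0))) as has_abc by correlationId
| where has_abc=1
```

After the `where`, every remaining event (including the EFG errors) shares a correlationId with at least one ABC event, so the existing is_success/is_failed counting and the spath extraction can follow.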
Hi All, I have a time field containing a time range in this format in the output of one Splunk query:

TeamWorkTimings 09:00:00-18:00:00

I want the values stored in two fields, like:

TeamStart 09:00:00
TeamEnd 18:00:00

How do I achieve this using a regex or split expression in Splunk? Please suggest.
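One way (a sketch; field names are taken from the question) is rex with two named capture groups, splitting on the hyphen — makeresults here just fabricates a test event:

```
| makeresults
| eval TeamWorkTimings="09:00:00-18:00:00"
| rex field=TeamWorkTimings "^(?<TeamStart>[^-]+)-(?<TeamEnd>.+)$"
```

An eval-only alternative: `| eval parts=split(TeamWorkTimings,"-") | eval TeamStart=mvindex(parts,0), TeamEnd=mvindex(parts,1)`.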
Hi Everyone, for some reason I'm getting a different CSV format in the file I download versus the file generated by the scheduled report functionality.

When I download the file from the Splunk search option, I am getting something like:

{"timestamp: 2024-04-02T22:42:19.655Z sequence: 735 blablaclasname: com.rr.jj.eee.rrr anotherblablaclasnameName: com.rr.rr.rrrr.rrr level: ERRROR exceptionMessage: blablabc .... }

When I receive the file by email, using the same query, I'm getting something like:

{"timestamp: 2024-04-02T22:42:19.655Z\nsequence: 735\nblablaclasname: com.rr.jj.eee.rrr\nanotherblablaclasnameName: com.rr.rr.rrrr.rrr\nlevel: ERRROR\n\nexceptionMessage: blablabc\n....}

In the *.conf file I am seeing:

LINE_BREAKER = \}(\,?[\r\n]+)\{?

Regards
I recently updated the apps on a dev search head and got this new error showing up in my _internal logs. I don't have any inputs configured currently in the add-on. Has anyone else seen this?

root@raz-spldevsh:/opt/splunk/etc/apps# tail -n5000 /opt/splunk/var/log/splunk/splunkd.log | grep -E "ERROR"
04-05-2024 11:26:08.663 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py
04-05-2024 11:26:08.686 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py
04-05-2024 11:26:08.699 +0000 ERROR ExecProcessor [690962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/conf_migration.py

Splunk 9.0.3
App version: 4.5.1
Thanks in advance. Hi guys, I need to extract limited values from fields. Query:

index="mulesoft" applicationName="s-concur-api" environment=PRD priority timestamp
| search NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*")
| spath content.payload{}
| mvexpand content.payload{}
| stats values(content.SourceFileName) as SourceFileName values(content.JobName) as JobName values(content.loggerPayload.archiveFileName) as ArchivedFileName values(message) as message min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| rex field=message max_match=0 "Expense Extract Process started for (?<FileName>[^\n]+)"
| rex field=message max_match=0 "API: START: /v1/expense/extract/ondemand/(?<OtherRegion>[^\/]+)\/(?<OnDemandFileName>\S+)"
| eval OtherRegion=upper(OtherRegion)
| eval OnDemandFileName=rtrim(OnDemandFileName,"Job")
| eval "FileName/JobName"= coalesce(OnDemandFileName,JobName)
| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",like('message',"%API: START: /v1/expense/extract/ondemand%"),"OnDemand",like('message',"Expense Extract Process started%"),"Scheduled")
| eval Status=case(like('message' ,"%Concur AP/GL File/s Process Status%"),"SUCCESS", like('tracePoint',"%EXCEPTION%"),"ERROR")
| eval Region= coalesce(Region,OtherRegion)
| eval OracleRequestId=mvappend("RequestId:",RequestID,"ImpConReqid:",ImpConReqId)
| eval Response= coalesce(message,error,errorMessage)
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged,"Match","NotMatch")
| rename Logon_Time as Timestamp
| table Status JobType Response ArchivedFileName ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs priority match
| where JobType!=" "
| search Status="*"
In the Response field I want to show only the following; I don't care about the rest:

PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 376 Company Code: 200 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 375 Company Code: 209 Operating Unit: US_AB_OU
PRD(SUCCESS): Concur AP/GL Extract V.3.02 - APAC ORACLE PAY AP Expense Report. Concur Batch ID: 374 Company Code: 210 Operating Unit: US_AB_OU

Current output (Status / Response / ArchiveFileName / correlationId):

Status: Success
Response: API: START: /v1/expense/extract; After calling flow archive-ConcurExpenseFile-SubFlow; Before calling flow archive-ConcurExpenseFile-SubFlow; Calling s-ebs-api for AP Import process; Concur AP/GL File/s Process Status; Concur Ondemand Started; Expense Extract Processing Starts; Extract has no GL Lines to Import into Oracle; the three PRD(SUCCESS) Expense Report lines above; PRD(SUCCESS): Concur AP/GL File/s Process Status - APAC Records Count Validation Passed
ArchiveFileName: EMEA_concur_expenses_
correlationId: 49cde170-e057-11ee-8125-de5fb5
Hello all,
SynApp: 3.0.3
OS: RHEL8 FIPS
Splunk 9.0.x

I configured this app and changed the index IPs in the local inputs.conf, but it isn't working. Obviously there are a lot of things that could be wrong, but I am really not seeing any app logging. Is there any way to configure that? Does this app have a FIPS incompatibility? The only thing I am finding are these logs in splunkd.log:

ERROR ExecProcessor [1044046 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Synack/bin/assessment_data.py" obj, end = self.raw_decode(s, idx=_w(s, 0).end())
ERROR ExecProcessor [1044046 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Synack/bin/vuln_data.py" json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Based on a drop-down list value, how do I change the search string in each panel? E.g., for a panel to load, the search string will vary as below:

index=test (source="/test/log/path/test1.log" $param1$ c="$param2$" $dropdownlistvalue1$ $dropdownlistvalue1$)

As the log path is different, all my params vary. So how can I change the search based on the drop-down list value?
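In Simple XML, one common approach is to have the dropdown's change block set the tokens the panels consume, so each selection rewrites the source path and the extra parameters together. A sketch — the token names, choices, and values here are made up:

```xml
<input type="dropdown" token="env">
  <label>Environment</label>
  <choice value="test1">Test 1</choice>
  <choice value="test2">Test 2</choice>
  <change>
    <!-- each condition sets every token the panel searches reference -->
    <condition value="test1">
      <set token="logpath">/test/log/path/test1.log</set>
      <set token="extra">c="foo"</set>
    </condition>
    <condition value="test2">
      <set token="logpath">/test/log/path/test2.log</set>
      <set token="extra">c="bar"</set>
    </condition>
  </change>
</input>
```

Panel searches then use the tokens, e.g. `index=test (source="$logpath$" $param1$ $extra$)`, so one dropdown drives every panel.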
How to keep specific events and discard the rest in props.conf and transforms.conf? We are receiving a large amount of data which is onboarded to Splunk via tar files. We don't require monitoring of all the events; we only need certain events with specific data to be monitored, and the rest of the files/sources need to be sent to nullQueue. Please give me some insights on it. Thanks in advance.
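The standard Splunk pattern is to route everything for the sourcetype to nullQueue first, then route the events you want to keep back to indexQueue. A sketch — the sourcetype name and the KEEP_ME_PATTERN regex are placeholders for your own:

```
# props.conf (on the indexer or heavy forwarder that parses the data)
[my_tar_sourcetype]
TRANSFORMS-filter = setnull, setparsing

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = KEEP_ME_PATTERN
DEST_KEY = queue
FORMAT = indexQueue
```

Order matters: transforms run left to right and the last matching one wins, so events matching KEEP_ME_PATTERN end up in indexQueue and everything else is discarded. Note this only works where parsing happens (indexer/HF), not on a Universal Forwarder.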
Hello, I could not find a clear answer. We have a setup where we run an IIS server on a Windows virtual machine. On the IIS server we run a PHP webshop that makes calls to different databases and external services. Does your observability system work out of the box on the PHP webshop, or is this not supported? The reason for the question is that some monitoring solutions, such as AppDynamics and New Relic, do not support that setup. The question is mainly to know whether we should start moving the setup to a different tech stack or whether we can wait a little.
Hello Splunkers!! Below are the sample events in which I want to mask the UserId and Password fields. No selected or interesting fields are available, so I want to mask them in the raw event directly. Please suggest one solution from the UI using the rex sed mode command, and a second solution using props.conf and transforms.conf from the backend.

Sample log:

<?xml version="1.0" encoding="UTF-8"?> <HostMessage><![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?><UserMasterRequest><MessageID>25255620</MessageID><MessageCreated>2024-04-05T07:00:55Z</MessageCreated><OpCode>CHANGEPWD</OpCode><UserId>pnkof123</UserId><Password>Summer123</Password><PasswordExpiry>2024-06-09</PasswordExpiry></UserMasterRequest>]]><original_header><IfcLogHostMessage xsi:schemaLocation="http://vanderlande.com/FM/Gtw/GtwLogging/V1/0/0 GtwLogging_V1.0.0.xsd"> <MessageId>25255620</MessageId> <MessageTimeStamp>2024-04-05T05:00:55Z</MessageTimeStamp> <SenderFmInstanceName>CMP_GTW</SenderFmInstanceName> <ReceiverFmInstanceName>FM_BPI</ReceiverFmInstanceName> </IfcLogHostMessage></original_header></HostMessage>
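At search time, rex mode=sed rewrites _raw in the results; at index time the equivalent is SEDCMD in props.conf (no transforms.conf needed for simple substitution). A sketch — the sourcetype name is an assumption, and the regexes assume the tags look exactly like the sample above:

```
# Search-time masking (UI), appended to the base search:
... | rex mode=sed field=_raw "s/<UserId>[^<]*/<UserId>XXXXXX/"
    | rex mode=sed field=_raw "s/<Password>[^<]*/<Password>XXXXXX/"

# Index-time masking — props.conf on the parsing tier,
# applied before the event is written to disk:
[your_xml_sourcetype]
SEDCMD-mask_user = s/<UserId>[^<]*/<UserId>XXXXXX/g
SEDCMD-mask_pwd  = s/<Password>[^<]*/<Password>XXXXXX/g
```

Search-time masking only hides values in that search's output; anyone with raw access still sees the originals. SEDCMD masks the data permanently at ingest, which is usually what credential masking requires. (A transforms.conf stanza with DEST_KEY=_raw is the alternative backend approach when you need more complex rewriting.)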
Data Summary is not showing the host at all, even though I already added a UDP input on port 514 with the IP address.
Hi Guys, in my scenario I need to show error details per correlationId. There is a field called tracePoint="EXCEPTION" and a message field with PRD(ERROR):. In some cases we have an exception first, and after that the transaction succeeds. In those cases I want to ignore the transaction in my query, but it is not ignoring the successful correlationIds in my result.

index="mulesoft" applicationName="s-concur-api" environment=PRD (tracePoint="EXCEPTION" AND message!="*(SUCCESS)*")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.ErrorType as Error content.errorType as errorType content.errorMsg as ErrorMsg content.ErrorMsg as errorMsg
| eval ErrorType=if(isnull(Error),"Unknown",Error)
| dedup CorrelationId
| eval errorType=coalesce(Error,errorType)
| eval Errormsg=coalesce(ErrorMsg,errorMsg)
| table CorrelationId, Timestamp, applicationName, locationInfo.fileName, locationInfo.lineInFile, errorType, message, Errormsg
| sort -Timestamp
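Worth noting: because the base search drops the success events before transaction runs, a correlationId that failed and then succeeded still looks like a pure failure. One approach (a sketch, reusing the filters from the question) is to bring the success events back in, flag ids that ever saw a success, and filter them out before building the table:

```
index="mulesoft" applicationName="s-concur-api" environment=PRD (tracePoint="EXCEPTION" OR message="*(SUCCESS)*")
| eventstats max(eval(if(match(message, "\(SUCCESS\)"), 1, 0))) as saw_success by correlationId
| where saw_success=0 AND tracePoint="EXCEPTION"
```

The rename/eval/table pipeline from the original query can then follow unchanged, and only correlationIds with no success anywhere in the time range will remain.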