All Topics

I have a table that looks like this:

```
Time                 Host  User  Activity
2021-01-01 01:02:01  ABC   Test  CommandLine: C:/Users/Cool/cool.exe File: cool.exe Hash: yr3f7r98jkfd7y38ykry73
2021-01-01 01:02:02  ABC   Test  CommandLine: C:/Users/Lame/lame.exe File: lame.exe Hash: kf39utkuk0ulftu39uk30utk
2021-01-01 01:02:03  ABC   Test  CommandLine: C:/Users/Idk/idk.exe File: idx.exe Hash: 9l09uk8dtyjy4j4098tk48
```

The query I used to build the table looks something like this:

```
host=ABC User=Test
| rename host AS Host
| eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S"), Activity=mvappend("CommandLine: ".CommandLine," ","File: ".File," ","Hash: ".Hash)
| table Time Host User Activity
| dedup consecutive=true Activity sortby Time
```

I am trying to use a drilldown so that when I click the hash in my dashboard, it redirects me to a website. The issue I'm having is that when I add the link and click the hash, instead of giving me just the hash ("9l09uk8dtyjy4j4098tk48"), it gives me the entire cell ("Hash: 9l09uk8dtyjy4j4098tk48"), which breaks my URL.

Expected output: https://website.com/9l09uk8dtyjy4j4098tk48
Actual output: https://website.com/Hash: 9l09uk8dtyjy4j4098tk48

Another issue is that no matter which cell I click, they all try to redirect me to the website. Example: https://website.com/CommandLine: C:/Users/Lame/lame.exe

How can I make it so only the hash value is clickable and produces my expected output?
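For reference, the label-stripping the drilldown needs can be sketched outside Splunk. This is an illustrative Python sketch (the function name and sample values are mine, not part of the dashboard): match only cells that start with `Hash:` and capture the bare value, so non-hash cells yield nothing and could be excluded from the link.

```python
import re

def extract_hash(cell_value):
    """Return the bare hash if the clicked cell is a 'Hash: ...' entry, else None."""
    m = re.match(r"Hash:\s*(\S+)", cell_value)
    return m.group(1) if m else None

print(extract_hash("Hash: 9l09uk8dtyjy4j4098tk48"))         # 9l09uk8dtyjy4j4098tk48
print(extract_hash("CommandLine: C:/Users/Lame/lame.exe"))  # None
```

In a dashboard, the same idea would live in a drilldown token eval/condition rather than Python; the sketch just shows the regex logic.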
We've had good success auto-instrumenting an all-Java Kubernetes application with the cluster agent, but we require the ability to use a custom APPDYNAMICS_AGENT_NODE_NAME. During manual instrumentation, this property can be set as an ENV in the container the Java agent is attaching to, but it's not clear from the documentation how to do this from the cluster agent config: https://docs.appdynamics.com/21.4/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent/auto-instrumentation-configuration

I am using the latest cluster agent operator and cluster agent, with a cluster-agent.yaml as follows:

```
apiVersion: appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "demo"
  controllerUrl: "http://xxx.com:80"
  account: "xxx"
  logLevel: "DEBUG"
  # docker image info
  image: "docker.io/appdynamics/cluster-agent:latest"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitor: [demo]
  instrumentationMethod: Env
  nsToInstrumentRegex: demo
  appNameStrategy: manual
  defaultAppName: demo
  #defaultCustomConfig: "-Dappdynamics.agent.nodeName=manual-test"
  defaultEnv: JAVA_TOOL_OPTIONS
  resourcesToInstrument: [ Deployment, StatefulSet ]
  instrumentationRules:
    - namespaceRegex: demo
      language: java
      appName: demo
      # customAgentConfig: -Dappdynamics.agent.nodeName="manual-test"
      # customAgentConfig: -Dappdynamics.agent.nodeName=${APPDYNAMICS_AGENT_NODE_NAME}
      customAgentConfig: APPDYNAMICS_AGENT_NODE_NAME="manual-test"
      imageInfo:
        image: docker.io/appdynamics/java-agent:20.3.0
        agentMountPath: /opt/appdynamics
```

I have tried all three variations of customAgentConfig shown above, with an APPDYNAMICS_AGENT_NODE_NAME also set in the target deployment, and none of them worked. Any help would be much appreciated.
Hello, I need to find a way to use another field for _time in a single query (I don't want to change props for just one query).

Sample Time: 2021-06-19T04:15:59.845Z

I've tried several strptime formats I've seen in other questions, but to no avail. I did get one to format previously for a table using the following:

```
| eval SeenTimeStringConverted=strftime(strptime(Time,"%Y-%m-%dT%H:%M:%S.%6N"),"%m/%d/%Y %H:%M:%S %p")
```

Here's the query I've been working on:

```
sourcetype="aws:cloudwatchlogs:securityhub" "CIS" "detail.findings{}.Compliance.Status"!=NULL
| rename "detail.findings{}.FirstObservedAt" as Time
| eval _time=strptime(Time,"%Y-%m-%dT%H:%M:%S.%6N")
| timechart count by "detail.findings{}.Compliance.Status"
```
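As a sanity check on the format string outside Splunk, here is an illustrative Python sketch (Python's `%f` plays the role of Splunk's `%N` subsecond variable here). Note the sample value has three fractional digits and a literal trailing `Z`, both of which the format has to account for:

```python
from datetime import datetime

# Sample value from the question: three fractional digits plus a literal "Z".
# %f consumes the fractional seconds; the "Z" must be matched explicitly.
sample = "2021-06-19T04:15:59.845Z"
parsed = datetime.strptime(sample, "%Y-%m-%dT%H:%M:%S.%fZ")
print(parsed.year, parsed.microsecond)  # 2021 845000
```

A format that expects six subsecond digits, or that ignores the trailing `Z`, can fail to parse this value, which may be why the attempts above returned nothing.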
Howdy fellow Splunkers! I have tried to find a previous article on this, but I must be missing it if there is one. I need help, as I am doing some app/add-on updates for the first time and have hit a roadblock.

I currently have an app installed whose folder in deployment-apps is titled forescout-app-for-splunk_291, and the new one is called forescout-app-for-splunk; they took off the version number. I was not the one who installed the first one, so I'm not sure what the thinking was. I am using a Deployment Server and can't figure out how to get it to replace the versioned app with the non-versioned one. If I just install it like a new app, then both are on the search head, and I am afraid of losing any config from the first app.

Oh man, I really hope that made sense. Any help is greatly appreciated!
Hello all, I've recently been tasked with alerting our support email when a user in Salesforce is locked out. The alert triggers when a user's LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT". However, this alert keeps getting triggered if an admin doesn't unlock the user account right away. Is there any way to suppress the alert when the usernames are identical to those in the previous alert?

```
index="salesforce" EVENT_TYPE="Login" LOGIN_STATUS=*
    [search EVENT_TYPE="Login" LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"
    | stats count by USER_ID
    | table USER_ID]
| stats latest(LOGIN_STATUS) AS LOGIN_STATUS latest(USER_NAME) AS USER_NAME latest(SOURCE_IP) AS SOURCE_IP latest(UserAccountId) AS "Account Id" latest(USER_TYPE) AS "User Type" latest(TIMESTAMP) AS "Time stamp" by USER_ID
| where LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"
```
Hi all, I'm working to correlate a series of events. These events are all part of the logging process of a separate application. What they have in common is a UUID.

The following searches individually produce what I'd like. The first finds the first timestamp associated with the start of the process (there are multiple processes running for each UUID, but the ask is to extract the first/last timestamp):

```
index=###### sourcetype=### "process() - start"
| rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
| eval startTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| stats earliest(startTime) as startingTimeStamp by UUID
```

```
index=###### sourcetype=### "process() - end"
| rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
| eval endTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| stats latest(endTime) as endingTimeStamp by UUID
```

I'm trying to learn how to use join to connect these events by UUID. This SPL returns a table that has the earliest and latest of startTime, rather than earliest(startTime) and latest(endTime):

```
index=###### sourcetype=### "process() - start"
| rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
| eval startTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| stats earliest(startTime) as startingTimeStamp by UUID
| join UUID type=left
    [ search index=###### sourcetype=### "process() - end"
    | rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
    | eval endTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
    | stats latest(endTime) as endingTimeStamp by UUID ]
```

Is join the appropriate command to use here? I'm reading about coalesce and append as well, but from my understanding append does not fit. Another consideration is that the UUID is not a field extraction but comes from a regex, so I'm unsure how the join could work if the subsearch has no knowledge of UUID until it runs and performs the rex.
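To make the rex from the searches above concrete, here is an illustrative Python sketch (the sample log line is made up): the pattern skips the first two space-delimited tokens and then captures everything up to the next comma as the UUID.

```python
import re

# Same pattern as the rex in the question: skip two space-delimited
# tokens, then capture up to the next comma as UUID.
pattern = re.compile(r"^(?:[^ \n]* ){2}(?P<UUID>[^,]+)")

line = "2021-09-01 12:00:00 123e4567-e89b-12d3-a456-426614174000, process() - start"
m = pattern.match(line)
print(m.group("UUID"))  # 123e4567-e89b-12d3-a456-426614174000
```

Because the subsearch re-runs the same rex before its stats, both sides of the join end up with a UUID field of their own, so the join key does exist on both sides by the time the join evaluates.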
I am preparing a report and need to estimate the amount of data that an average Microsoft or Linux (RHEL) server sends into Splunk on a daily basis. Just a rough estimate, please. Say the data includes logs and DBs. Thanks a million.
I created a dashboard in Dashboard Studio and want to set up an alert action so that when a certain metric passes a threshold, a PDF is emailed to a group of people. In my head, the alert action is a script that makes an API call to generate a PDF and then emails it as an attachment to the group. However, when I use this API command, the output is blank. It works fine with a standard dashboard.

```
curl -X POST -u <user>:<password> -k 'https://localhost:8089/services/pdfgen/render?input-dashboard=my_dashboard&namespace=search&paper-size=a4-landscape'
```

My question is: has the API not been updated to support exporting dashboards made in Dashboard Studio, or am I missing an additional parameter?
I need help breaking the following data into segments. The data is currently lumped together. I have been working with the Splunk Add Data feature to attempt to parse the data correctly:

```
07400 16:31:30.320 Processing 51 log entries in <servername.615494dd0000.dblog> from servername 07784 16:31:30.492 Processing 51 log entries in <servername.615494e00000.dblog> from servername 07400 16:31:30.633 DBLog Summary: time=313ms (total=51, mean time=6.137/rec), Message:(c=32, t=297) Content:(c=5, t=0) NodeStats:(c=1, t=0) VirusScannerStats:(c=13, t=0) 07784 16:31:30.987 DBLog Summary: time=484ms (total=51, mean time=9.490/rec), Message:(c=35, t=469) Content:(c=4, t=0) NodeStats:(c=1, t=0) VirusScannerStats:(c=11, t=0) 07784 16:31:31.213 Processing 51 log entries in <servername.615494e00000.dblog> from servername 07784 16:31:31.278 DBLog Summary: time=62ms (total=51, mean time=1.216/rec), Message:(c=31, t=31) Content:(c=9, t=16) NodeStats:(c=1, t=0) VirusScannerStats:(c=10, t=0) 07784 16:31:31.691 Processing 51 log entries in <servername.615494e20000.dblog> from servername 07400 16:31:31.739 Rule Profiler: writing queued records to the database. 07400 16:31:31.745 Rule Profiler: finished writing queued records to the database. Record count: 53 07784 16:31:31.776 DBLog Summary: time=93ms (total=51, mean time=1.824/rec), Message:(c=31, t=78) Content:(c=6, t=0) NodeStats:(c=2, t=0) VirusScannerStats:(c=12, t=0)
```

In the Regex tester I have used the regex `(\d{5}\s+\d{2}:\d{2}:\d{2}.\d{3}\s+Processing 51)` to correctly capture where the data needs to be on a new line.
I need the event data parsed to look as follows:

```
07400 16:31:30.320 Processing 51 log entries in <servername.615494dd0000.dblog> from servername
07784 16:31:30.492 Processing 51 log entries in <servername.615494e00000.dblog> from servername
07400 16:31:30.633 DBLog Summary: time=313ms (total=51, mean time=6.137/rec), Message:(c=32, t=297) Content:(c=5, t=0) NodeStats:(c=1, t=0) VirusScannerStats:(c=13, t=0)
07784 16:31:30.987 DBLog Summary: time=484ms (total=51, mean time=9.490/rec), Message:(c=35, t=469) Content:(c=4, t=0) NodeStats:(c=1, t=0) VirusScannerStats:(c=11, t=0)
07784 16:31:31.213 Processing 51 log entries in <servername.615494e00000.dblog> from servername
07784 16:31:31.278 DBLog Summary: time=62ms (total=51, mean time=1.216/rec), Message:(c=31, t=31) Content:(c=9, t=16) NodeStats:(c=1, t=0) VirusScannerStats:(c=10, t=0)
07784 16:31:31.691 Processing 51 log entries in <servername.615494e20000.dblog> from servername
07400 16:31:31.739 Rule Profiler: writing queued records to the database.
07400 16:31:31.745 Rule Profiler: finished writing queued records to the database. Record count: 53
07784 16:31:31.776 DBLog Summary: time=93ms (total=51, mean time=1.824/rec), Message:(c=31, t=78) Content:(c=6, t=0) NodeStats:(c=2, t=0) VirusScannerStats:(c=12, t=0)
```

I have tried LINE_BREAKER=([\r\n]+), BREAK_ONLY_BEFORE, MUST_BREAK_AFTER, and MUST_NOT_BREAK_BEFORE, along with the regex shown above, in the Splunk wizard, but it will not break the data as needed. Thanks.
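The break points above can be sketched outside Splunk with a zero-width lookahead. This illustrative Python sketch (the blob is abbreviated from the data above, and the pattern generalizes the question's regex so that it breaks before every `NNNNN HH:MM:SS.mmm` prefix, not just the "Processing 51" lines):

```python
import re

# Abbreviated version of the lumped data from the question.
blob = ("07400 16:31:30.320 Processing 51 log entries in <a.dblog> from servername "
        "07784 16:31:30.492 Processing 51 log entries in <b.dblog> from servername "
        "07400 16:31:30.633 DBLog Summary: time=313ms (total=51)")

# Split just before each timestamp prefix; the lookahead is zero-width,
# so the prefix stays attached to its own segment.
segments = [s.strip() for s in
            re.split(r"(?=\d{5}\s+\d{2}:\d{2}:\d{2}\.\d{3}\s)", blob)
            if s.strip()]
for s in segments:
    print(s)  # one segment per timestamp prefix
```

If the raw feed really arrives as one stream, a props.conf LINE_BREAKER applying the same lookahead idea, something along the lines of `(\s+)(?=\d{5}\s+\d{2}:\d{2}:\d{2}\.\d{3})`, might be worth testing; this is an untested illustration, not a confirmed setting.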
Hello! I'm hoping some Splunk masters can help me with what I thought would have been an easy task, but I'm very much stuck on it. How can I group results that start alike into one result within the same field?

I have a field that spits out results formatted like this:

```
index=prod_side sourcetype=prod_one fail_code=*
| table fail_code
```

Results:

```
fail_code
c-tr [213]
c-tr [893]
c-tr [309]
e-hw [gold]
e-hw [silver]
e-hw [bronze]
e-pr [vbix]
e-pr [zbix]
g-tr [345]
g-tr [123]
d-st [(intel.arm64) T 123 123]
d-st [(intel.arm64) T 456 456]
```

I want to group the results and count the total for each by the four characters before the brackets. The content in the brackets is not relevant to me and can be dropped from the results table:

```
fail_code_name  value_count
c-tr            3
e-hw            3
e-pr            2
g-tr            2
d-st            2
```
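The grouping described above amounts to stripping everything from the bracket onward and counting by the remaining prefix. This illustrative Python sketch (sample values copied from the question) shows the regex idea; in SPL the analogue would be a rex/eval that keeps only the prefix followed by a `stats count`:

```python
import re
from collections import Counter

# Sample fail_code values from the question.
fail_codes = [
    "c-tr [213]", "c-tr [893]", "c-tr [309]",
    "e-hw [gold]", "e-hw [silver]", "e-hw [bronze]",
    "e-pr [vbix]", "e-pr [zbix]",
    "g-tr [345]", "g-tr [123]",
    "d-st [(intel.arm64) T 123 123]", "d-st [(intel.arm64) T 456 456]",
]

# Keep only the token before the bracketed part, then count per prefix.
counts = Counter(re.match(r"^(\S+)\s*\[", code).group(1) for code in fail_codes)
print(counts)  # Counter({'c-tr': 3, 'e-hw': 3, 'e-pr': 2, 'g-tr': 2, 'd-st': 2})
```

Splitting on the first whitespace would work equally well here; the regex version also tolerates prefixes longer or shorter than four characters.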
I am trying to create a dashboard with dynamic dropdowns using the new JSON of Dashboard Studio. I'm not great at the XML of the Classic Dashboards, but there are a good number of videos/sites that show how to do things and why. Dashboard Studio appears new enough that it doesn't have much for a rookie like me. I'd like something like https://www.youtube.com/watch?v=BJm04grvvf8 but for Dashboard Studio. Does anyone know of anything? I am coming up with nothing. I just have some documentation that I'm not great at reading and understanding at https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/inputs, found via https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-Studio-Dynamic-loading-a-dropdown-list/m-p/556552#M38690.
Hello, I have a CSV file in this form:

```
2021-08-30 15:45:32;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;CONNEXION;;
2021-08-30 15:45:24;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;STATUS;;BDD
2021-08-30 15:45:16;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;START;App_start;WEB
```

corresponding to these 8 fields: date, application, user, host, ip, type, detail, module.

I have two questions: How can I extract these fields? And how can I extract fields at search time (to be able to be retroactive on old logs)?

These are my current props.conf and transforms.conf, deployed on the Search Head and Indexers, and the inputs.conf file on the Universal Forwarder:

props.conf

```
[csvlogs]
disabled = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
KV_MODE = none
REPORT-fieldsextraction = logs_fields
```

transforms.conf

```
[logs_fields]
DELIMS = ";"
FIELDS = date,application,user,hostname,ip,type,detail,module
KEEP_EMPTY_VALS = true
```

inputs.conf

```
[Monitor://D:\repository\logs.csv]
disabled = false
sourcetype = csvlogs
index = logs_index1
```

Do you have solutions?
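As a quick check that the delimiter and field list line up with the sample data, here is an illustrative Python sketch (field names taken from the transforms.conf above; note trailing empty columns, which is why KEEP_EMPTY_VALS matters):

```python
import csv
import io

# Two sample rows from the question; fields follow the transforms.conf FIELDS list.
raw = (
    "2021-08-30 15:45:32;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;CONNEXION;;\n"
    "2021-08-30 15:45:24;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;STATUS;;BDD\n"
)
fields = ["date", "application", "user", "hostname", "ip", "type", "detail", "module"]

rows = [dict(zip(fields, rec)) for rec in csv.reader(io.StringIO(raw), delimiter=";")]
print(rows[0]["user"], rows[1]["module"])  # j.dupont BDD
```

One thing worth double-checking: the field list in the prose says `host`, while transforms.conf says `hostname`; whichever name is used in searches has to match the FIELDS entry.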
Has anyone configured Splunk to collect logs from Cloud.gov? Please share how it is done. Thanks a million.
I'm new to Splunk and I'm trying to do something that is probably basic, but I haven't been able to figure out how to do it.

I have a log in Splunk which contains an http_query along the lines of:

```
my_object[prop1]=someVal&my_object[prop2]=someOtherVal
```

I'm trying to use a timechart to inspect these values. I've tried `timechart count by my_object[prop1]`, which tells me prop1 is undefined. I also tried `timechart count by my_object.prop1`, which gives me a time series with NULL everywhere. How can I do this?
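Part of the confusion is that `my_object[prop1]` is a single literal key name in the query string, not a nested structure, so SPL's `field.subfield` notation doesn't apply. This illustrative Python sketch shows what a query-string parser actually sees:

```python
from urllib.parse import parse_qs

# The bracketed names are literal keys; parse_qs keeps "my_object[prop1]"
# as one opaque key rather than nesting prop1 under my_object.
query = "my_object[prop1]=someVal&my_object[prop2]=someOtherVal"
params = parse_qs(query)
print(params["my_object[prop1]"])  # ['someVal']
```

In Splunk, the usual approach for a key like this is to extract the value into a plainly named field first (e.g. with rex) and then `timechart count by` that field.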
I have the following query, and I am using it in a dashboard to show the errors categorized:

```
index=myindex sourcetype=mysource_type:app
| spath message
| regex message="^.*error creating account.*$$"
| top message
```

Now, this is working, but it is showing the complete messages. The error messages have the following format most of the time:

```
message: Log: "error creating account {\"status\":\"error\",\"message\":\"Error while creating account, 500 - Internal Server Error\"}"
```

When the stats table is displayed, I would like to show only the message part of this error message; that is, it only needs to show "Error while creating account, 500 - Internal Server Error". It would be very helpful if someone could point out how I can do this.
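The inner text can be pulled out with a regex that targets the escaped `\"message\":\"...\"` fragment. This illustrative Python sketch (the raw value mirrors the format shown above) shows the pattern; the same capture idea would go into an SPL rex before the `top`:

```python
import re

# Raw value mimicking the log format in the question, with escaped quotes.
raw = r'Log: "error creating account {\"status\":\"error\",\"message\":\"Error while creating account, 500 - Internal Server Error\"}"'

# Match the escaped "message" key, then lazily capture up to the next \".
m = re.search(r'\\"message\\":\\"(.*?)\\"', raw)
print(m.group(1))  # Error while creating account, 500 - Internal Server Error
```

Whether the quotes arrive escaped or not depends on how the event was indexed, so the exact escaping in the rex may need adjusting against real events.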
Hello, I've put two timecharts on top of each other to compare their events over time. Both timecharts use the same time range and span. The top timechart has many data points, whereas the bottom has just a few. How can I show the same time range on the x-axis in both timecharts?
When we create new alerts for testing, we have the correlation search create the notable event with a status of "Testing". This way, any alerts that fire go into Incident Review with a status of "Testing". The problem is, when we are ready to move them out of testing, we change the notable event configuration to have a status of "New". But when we change that configuration, it changes all of the old notable events that fired with a "Testing" status to "New", which throws off metrics, because suddenly there's an influx of notable events that show up as "New" even though they were previously in status "Testing".

Is there a way to change the default status for notable events and have it NOT change the old ones that previously fired with the old default status?
I need help with a regex to show me if the input field has spaces at the beginning or end of the string, OR if it contains a " (double quote) anywhere in the string.

```
<change>
  <eval token="validationResult">if(match(value, "[^\"a-zA-Z0-9]"), "padded space or doublequote identified", "All Good")</eval>
</change>
```
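The character-class regex above flags any non-alphanumeric character, which is broader than the stated requirement. A regex matching exactly "leading or trailing whitespace, OR a double quote anywhere" can be sketched like this (illustrative Python; the `validate` helper and messages mirror the eval's two outcomes):

```python
import re

# Leading whitespace, OR trailing whitespace, OR a double quote anywhere.
pattern = re.compile(r'^\s|\s$|"')

def validate(value):
    if pattern.search(value):
        return "padded space or doublequote identified"
    return "All Good"

print(validate(" padded"))     # padded space or doublequote identified
print(validate('has"quote'))   # padded space or doublequote identified
print(validate("clean"))       # All Good
```

The same pattern should drop into the `match(value, "...")` call, though the double quote will need escaping inside the XML attribute/eval string.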
I noticed that the hardening standards state, "Disable automatic chart recovery in the analytics workspace. See Charts in the Splunk Analytics Workspace in the Using the Splunk Analytics Workspace manual." I looked at the link, but did not find any explanation of what risk keeping that feature enabled actually poses, hence I am seeking some clarification.
Hello dears, how can I sort these field values?

Field = "port"

```
0/1/0/2/
0/8/0/7/
0/2/0/3/
0/5/0/2/
0/6/0/3/
0/16/0/2
0/18/0/6
0/16/0/5
0/4/0/2/
0/6/0/2/
0/18/0/2
0/12/0/4
0/3/0/7/
```

Regards.
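The catch with values like these is that a plain string sort is lexicographic, so `0/16/...` sorts before `0/2/...`. Sorting numerically per slash-separated component gives the expected order. This illustrative Python sketch uses a subset of the values above:

```python
# Sort slash-delimited port paths numerically per component rather than
# lexicographically (a string sort would put 0/16 before 0/2).
ports = ["0/1/0/2/", "0/8/0/7/", "0/2/0/3/", "0/16/0/2",
         "0/18/0/6", "0/4/0/2/", "0/12/0/4", "0/3/0/7/"]

def port_key(port):
    # Split on "/", drop empty trailing parts, compare each piece as an integer.
    return [int(p) for p in port.split("/") if p]

ports.sort(key=port_key)
print(ports[:3])  # ['0/1/0/2/', '0/2/0/3/', '0/3/0/7/']
```

In SPL, the equivalent trick is usually to build a sortable key, e.g. zero-padding each component with eval/replace, and then `sort` on that key.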