All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, we are using Microsoft SQL Server as the database for one of our applications. For Microsoft SQL Server, by default we are able to see basic hardware metrics such as CPU usage, memory usage, and disk I/O. Is it possible to also get disk usage by using the DB Agents? Regards, Madhusri R
I want to get metrics from multiple index/sourcetype combinations. I have been using the append command with subsearches to do it, but I need to process a lot of events and hit the limitations of subsearches: although I get all the data from the primary query, the appended results get truncated.

I'm sure there is an easy way of doing this, and it's what Splunk is meant to do, but I can't work out how to cater for the different manipulation that needs to be done depending on the index and sourcetype. The following is a relatively simple example, but I have more complex queries which need to calculate rates from absolute values, etc.

So basically I have three queries (one of which needs a join so I can do some calculations), keep _time, host, and the metric I want, and then do the visualisation:

index=windows sourcetype=PerfmonMk:Memory host IN(host1,host2,host3)
| join type=outer host
    [ search index=windows sourcetype=WMI:ComputerSystem host IN(host1,host2,host3) earliest=-45d latest=now()
    | stats last(TotalPhysicalMemory) as TPM by host
    | eval TPM=TPM/1024/1024 ]
| eval winmem=((TPM-Available_MBytes)/TPM)*100
| fields _time host mem
| append
    [ search index=linux sourcetype=vmstat host IN(host4,host5,host6)
    | where isnotnull(memUsedPct)
    | eval linmem=memUsedPct
    | fields _time host mem ]
| append
    [ search index=unix sourcetype="nmon-MEMNEW" host IN(host7,host8,host9)
    | where isnotnull("Free%")
    | eval aixmem=100-'Free%'
    | fields _time host mem ]
| eval host=upper(host)
| timechart limit=0 span=1h perc95(mem) as Memory_Utilisation by host
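One way to avoid the append/subsearch truncation limits is to pull the sources in a single pipeline and derive the metric per sourcetype. A minimal sketch using multisearch (it only accepts streaming subsearches, so the Windows branch that needs the join would still have to be handled separately; the host and field names are simply the ones from the example above):

| multisearch
    [ search index=linux sourcetype=vmstat host IN(host4,host5,host6)
      | where isnotnull(memUsedPct) | eval mem=memUsedPct ]
    [ search index=unix sourcetype="nmon-MEMNEW" host IN(host7,host8,host9)
      | where isnotnull('Free%') | eval mem=100-'Free%' ]
| eval host=upper(host)
| timechart limit=0 span=1h perc95(mem) as Memory_Utilisation by host

Because multisearch runs its legs as one streaming search, the results are not subject to the subsearch row limits that truncate append.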
Hi guys, I am very new to Splunk and this is only my first week using it. What I want to do is view the performance logs of my own local machine and then put them into a dashboard. It would also be good to be able to get the number of times I have logged into my laptop, if that is possible. The question is: do I need to use a universal forwarder to be able to do all this? I am not sure; from what I have read online the universal forwarder is used for remote machines, but because it's local, would I need one? I can imagine this being a very noobie question, but I need the help if someone is able to. Thank you
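If Splunk Enterprise is installed directly on the laptop, no forwarder is needed: local inputs can be enabled through Settings > Add Data or via an inputs.conf in a local app. A minimal sketch of the kind of stanzas involved (the stanza names and interval are just illustrative, and the Windows inputs assume the Splunk Add-on for Windows conventions):

# inputs.conf (sketch)
[perfmon://Processor]
object = Processor
counters = % Processor Time
instances = _Total
interval = 10

[WinEventLog://Security]
disabled = 0

Interactive logons then show up as Security events with EventCode 4624, which can be counted in a dashboard panel.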
Hi, I have a query and I am not sure why it's not working. Assume I have the following JSON record, which has been extracted at index time:

index: network
sourcetype: devices
record: { "deviceId": 1234, "hostName": "Router1" }

1. index=network sourcetype=devices deviceId=1234 => works as expected
2. index=network TERM(sourcetype::devices) => works as expected
3. index=network TERM(sourcetype::devices) deviceId=1234 => fails, returns 0 records
4. index=network TERM(sourcetype::devices) earliest=-7d@d => fails, returns 0 records
5. index=network sourcetype::devices deviceId=1234 => works as expected
6. index=network sourcetype::devices deviceId::1234 => works as expected
7. index=network sourcetype::devices deviceId::1234 earliest=-7d@d => works as expected

The real question is: why do queries 3 and 4 fail when the others work, especially when I can see that query 2 works and returns the correct data? What impact does TERM() have on the processing flow, such that adding earliest or = makes it fail?

cheers
-brett
Hi, I tried to find this in the docs but had no luck; more than happy to RTM if someone has the link. On the black menu, top right, there is Help, with submenus of:

...
Tutorials
Help with this page
File a bug
...

I want to change where these point to, or be able to leverage the link they point to. For example:

Help with this page: where do I put my own docs so they will be used?
File a bug: I want this to point to my Jira.
Tutorials: I want this to point to a wiki or SharePoint or similar.

Cheers
-brett
So I am very new to Splunk and I have just started using it. What I want to do is view my own laptop's operating system event logs and performance data. What I have been doing is logging into Splunk and selecting the "Add Data" button; from there I select "Monitor". For example, I have chosen to monitor my local event log, but for some reason when I try to search for anything I get nothing back, so something is wrong and I don't know what. Please help.
Here's an example of some error logs that simply show which app reported an error and in which country:

_time(s)  sourcetype  country
0         app1        US
1         app1        DE
2         app2        DE
65        app2        US
66        app2        US
67        app1        DE

Here's the timechart I would like to retrieve (span=1m):

_time                 app1                  app2
2021-09-30 00:00:00   {"US": 1, "DE": 1}    {"DE": 1}
2021-09-30 00:01:00   {"DE": 1}             {"US": 2}

Is this, or something similar, possible?
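One way to get close to this shape is to build the per-country JSON-like string yourself and then pivot it with xyseries. A minimal sketch, assuming the field names sourcetype and country from the example above:

| bin _time span=1m
| stats count by _time sourcetype country
| eval pair="\"".country."\": ".count
| stats values(pair) as pairs by _time sourcetype
| eval cell="{".mvjoin(pairs, ", ")."}"
| xyseries _time sourcetype cell

The cell values end up as strings rather than real JSON objects, but the table layout matches the desired output.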
I have a multi-site cluster and am planning on decommissioning one site to transform it into a single-site cluster. I am looking over these two guides:

https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite
https://docs.splunk.com/Documentation/Splunk/8.1.2/Indexer/Converttosinglesite

and trying to see how to do both, preferably at the same time. When converting to single-site, the docs say to stop the entire cluster, update the configurations, then start the cluster back up. Is there any issue with making the configuration changes necessary for decommissioning the old site while everything is offline, and only bringing up the remaining site? Basically, the current plan is:

1. Stop all nodes
2. Update the manager configs:
   - Set multisite to false
   - Set single-site search/replication factors
   - Remove the site attribute
   - Remove the available_sites attribute / site mappings
3. Update the search head configs:
   - Set multisite to false
   - Remove the site attribute
4. Start the nodes that remain in the new site

Would this work, or would it cause conflicts in replication somehow? Do I need to use Splunk commands on the cluster manager to remove the old indexers?
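For reference, a minimal sketch of what the single-site settings on the cluster manager end up looking like after the plan above (attribute names per the conversion doc; the factor values are placeholders):

# server.conf on the cluster manager (sketch)
[general]
# site = site1            <- removed

[clustering]
multisite = false
replication_factor = 3
search_factor = 2
# available_sites, site_replication_factor, site_search_factor, site_mappings removed

On the search heads, multisite = false goes under [clustering] and the site attribute under [general] is removed as well.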
I have a table that looks like this:

Time                  Host  User  Activity
2021-01-01 01:02:01   ABC   Test  CommandLine: C:/Users/Cool/cool.exe File: cool.exe Hash: yr3f7r98jkfd7y38ykry73
2021-01-01 01:02:02   ABC   Test  CommandLine: C:/Users/Lame/lame.exe File: lame.exe Hash: kf39utkuk0ulftu39uk30utk
2021-01-01 01:02:03   ABC   Test  CommandLine: C:/Users/Idk/idk.exe File: idx.exe Hash: 9l09uk8dtyjy4j4098tk48

The query I used to make the table looks something like this:

host=ABC User=Test
| rename host AS Host
| eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S"), Activity=mvappend("CommandLine: ".CommandLine," ","File: ".File," ","Hash: ".Hash)
| table Time Host User Activity
| dedup consecutive=true Activity sortby Time

I am trying to use a drilldown so that when I click the hash in my dashboard, it redirects me to a website. The issue I'm having is that when I add the link and click the hash, instead of just giving me the hash "9l09uk8dtyjy4j4098tk48", it gives me the entire cell "Hash: 9l09uk8dtyjy4j4098tk48", which breaks my URL.

Expected output: https://website.com/9l09uk8dtyjy4j4098tk48
Actual output: https://website.com/Hash: 9l09uk8dtyjy4j4098tk48

Another issue is that no matter which cell I click, it tries to redirect me to the website, for example: https://website.com/CommandLine: C:/Users/Lame/lame.exe

How can I make it so that only clicking the hash value gives my expected output?
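One Simple XML pattern that may get close to this is to restrict the drilldown to the Activity column and strip everything up to "Hash: " from the clicked value before building the URL. A rough sketch only (the hash_only token name is made up, and it assumes the hash is always the last piece of the cell):

<drilldown>
  <condition field="Activity">
    <eval token="hash_only">replace($click.value2|s$, "(?s)^.*Hash:\s*", "")</eval>
    <link target="_blank">https://website.com/$hash_only$</link>
  </condition>
  <!-- clicks on any other column fall through and do nothing -->
  <condition field="*"></condition>
</drilldown>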
We've had good success auto-instrumenting an all-Java Kubernetes application with the cluster agent, but require the ability to use a custom APPDYNAMICS_AGENT_NODE_NAME. During manual instrumentation, this property can be set as an ENV in the container the Java agent is attaching to, but it's not clear from the documentation how to do this from the cluster agent config: https://docs.appdynamics.com/21.4/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent/auto-instrumentation-configuration

I am utilizing the latest cluster agent operator and cluster agent, with a cluster-agent.yaml as follows:

```
apiVersion: appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "demo"
  controllerUrl: "http://xxx.com:80"
  account: "xxx"
  logLevel: "DEBUG"
  # docker image info
  image: "docker.io/appdynamics/cluster-agent:latest"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitor: [demo]
  instrumentationMethod: Env
  nsToInstrumentRegex: demo
  appNameStrategy: manual
  defaultAppName: demo
  #defaultCustomConfig: "-Dappdynamics.agent.nodeName=manual-test"
  defaultEnv: JAVA_TOOL_OPTIONS
  resourcesToInstrument: [ Deployment, StatefulSet ]
  instrumentationRules:
    - namespaceRegex: demo
      language: java
      appName: demo
      # customAgentConfig: -Dappdynamics.agent.nodeName="manual-test"
      # customAgentConfig: -Dappdynamics.agent.nodeName=${APPDYNAMICS_AGENT_NODE_NAME}
      customAgentConfig: APPDYNAMICS_AGENT_NODE_NAME="manual-test"
      imageInfo:
        image: docker.io/appdynamics/java-agent:20.3.0
        agentMountPath: /opt/appdynamics
```

I have tried all three variations of customAgentConfig shown above, with an APPDYNAMICS_AGENT_NODE_NAME also set in the target deployment. Any help would be much appreciated.
Hello, I need to find a way to use another field for _time in a single query (I don't want to change props just for one query).

Sample time value: 2021-06-19T04:15:59.845Z

I've tried several strptime formats I've seen in other questions, but to no avail. I did get one to work previously for a table format using the following:

| eval SeenTimeStringConverted=strftime(strptime(Time,"%Y-%m-%dT%H:%M:%S.%6N"),"%m/%d/%Y %H:%M:%S %p")

Here's the query I've been working on:

sourcetype="aws:cloudwatchlogs:securityhub" "CIS" "detail.findings{}.Compliance.Status"!=NULL
| rename "detail.findings{}.FirstObservedAt" as Time
| eval _time=strptime(Time,"%Y-%m-%dT%H:%M:%S.%6N")
| timechart count by "detail.findings{}.Compliance.Status"
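The sample value has three sub-second digits and a trailing Z, so a format string along these lines may line up better. A sketch only, assuming all FirstObservedAt values look like the sample above:

sourcetype="aws:cloudwatchlogs:securityhub" "CIS" "detail.findings{}.Compliance.Status"!=NULL
| rename "detail.findings{}.FirstObservedAt" as Time
| eval _time=strptime(Time, "%Y-%m-%dT%H:%M:%S.%3NZ")
| timechart count by "detail.findings{}.Compliance.Status"

Keep in mind that the time range picker still filters on the original indexed _time when the events are retrieved, so the search window needs to be wide enough to cover the events whose _time you are rewriting.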
Howdy fellow Splunkers! I have tried to find a previous article but I must be missing it if there is one. I need help, as I am doing some app/add-on updates for the first time and hit a roadblock. I have an app installed currently, and the app folder in deployment-apps is titled forescout-app-for-splunk_291, while the new one is called forescout-app-for-splunk; they took the version number off. I was not the one who installed the first one, so I'm not sure what their thinking was. I am using a deployment server and can't figure out how to get it to replace the versioned app with the non-versioned one. If I just install it like a new app, then both are on the SH, and I am afraid of losing any config from the first app. Oh man, I really hope that made sense. Any help is greatly appreciated!
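For what it's worth, one common pattern with a deployment server is to copy any local/ config from the old app folder into the new one, add the renamed app to the relevant server class, and remove the old app from that server class so the clients drop it. A very rough sketch of the serverclass.conf change (the server class name "forescout" here is entirely made up):

# serverclass.conf on the deployment server (sketch)
[serverClass:forescout:app:forescout-app-for-splunk_291]
# old stanza removed once clients have the new app

[serverClass:forescout:app:forescout-app-for-splunk]
restartSplunkd = true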
Hello all, I've recently been tasked with alerting our support email when a user in Salesforce is locked out. The alert triggers when a user's LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT". However, this alert keeps getting triggered if an admin doesn't unlock the user account right away. Is there any way to suppress the alert when the usernames are identical to those in the previous alert?

index="salesforce" EVENT_TYPE="Login" LOGIN_STATUS=*
    [search EVENT_TYPE="Login" LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"
    | stats count by USER_ID
    | table USER_ID]
| stats latest(LOGIN_STATUS) AS LOGIN_STATUS latest(USER_NAME) AS USER_NAME latest(SOURCE_IP) AS SOURCE_IP latest(UserAccountId) AS "Account Id" latest(USER_TYPE) AS "User Type" latest(TIMESTAMP) AS "Time stamp" by USER_ID
| where LOGIN_STATUS="LOGIN_ERROR_PASSWORD_LOCKOUT"
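Splunk's built-in alert throttling covers this case: it suppresses further triggers for results that share the same field value. In the alert's UI this is the throttle / "Suppress results containing field value" setting; in savedsearches.conf it looks roughly like the sketch below (the stanza name is a placeholder, and the suppression period should be tuned to how long an account typically stays locked):

# savedsearches.conf (sketch)
[Salesforce - Password Lockout Alert]
alert.suppress = 1
alert.suppress.fields = USER_ID
alert.suppress.period = 24h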
Hi all, I'm working to correlate a series of events. These events are all part of the logging of a separate application's process. What is common between them is a UUID. The following searches individually produce what I'd like for the first timestamp associated with the start of the process and the last timestamp for the end (there are multiple processes running for each UUID, but the ask is to extract the first/last timestamp):

index=###### sourcetype=### "process() - start"
| rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
| eval startTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| stats earliest(startTime) as startingTimeStamp by UUID

index=###### sourcetype=### "process() - end"
| rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
| eval endTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| stats latest(endTime) as endingTimeStamp by UUID

I'm trying to learn how to use join to connect these events by UUID. This SPL returns a table that has the earliest and latest of startTime, rather than the earliest(startTime) and latest(endTime):

index=###### sourcetype=### "process() - start"
| rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
| eval startTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| stats earliest(startTime) as startingTimeStamp by UUID
| join UUID type=left
    [ search index=###### sourcetype=### "process() - end"
    | rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
    | eval endTime = strftime(_time, "%Y-%d-%m %H:%M:%S")
    | stats latest(endTime) as endingTimeStamp by UUID ]

Is join the appropriate command to use here? I'm reading about coalesce and append as well, but from my understanding append does not fit. Another piece is that UUID is not a field extraction but comes from a regex, so I'm unsure how the join would work if the subsearch has no knowledge of UUID until it runs and performs the rex.
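A join can often be avoided here: both event types can be pulled in one search and the start/end timestamps computed with conditional stats. A minimal sketch, assuming both markers live in the same index/sourcetype as in the searches above:

index=###### sourcetype=### ("process() - start" OR "process() - end")
| rex "^(?:[^ \n]* ){2}(?P<UUID>[^,]+)"
| stats min(eval(if(searchmatch("process() - start"), _time, null()))) as startEpoch
        max(eval(if(searchmatch("process() - end"), _time, null()))) as endEpoch
        by UUID
| eval startingTimeStamp=strftime(startEpoch, "%Y-%m-%d %H:%M:%S"), endingTimeStamp=strftime(endEpoch, "%Y-%m-%d %H:%M:%S")

Because the rex runs before the stats, the UUID is available to group on without either side needing to know about the other's extraction.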
I am preparing a report and need to estimate the amount of data an average Microsoft or Linux (RHEL) server would send into Splunk on a daily basis, please. Just a rough estimate. Say the data includes logs and databases. Thanks a million.
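If some representative hosts are already sending data, actual per-host volume can be measured from the license usage logs rather than guessed. A rough sketch (it assumes access to the _internal index on the license manager; in license_usage.log, b is bytes and h is the host):

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by h
| eval GB=round(bytes/1024/1024/1024, 2)

Run it over one day, or divide by the number of days searched, to get a per-host daily figure.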
I created a dashboard in Dashboard Studio and want to set up an alert action so that when a certain metric passes a threshold, a PDF is emailed to a group of people. In my head the alert action is a script that does an API call to generate a PDF and then emails it as an attachment to the group. However, when I use this API command the output is blank (it works fine with a standard dashboard):

curl -X POST -u <user>:<password> -k 'https://localhost:8089/services/pdfgen/render?input-dashboard=my_dashboard&namespace=search&paper-size=a4-landscape'

My question is: has the API not been updated to support exporting dashboards made in Dashboard Studio, or am I missing an additional parameter?
I need help breaking the following data into separate events. The data is currently lumped together, and I have been working with the Splunk Add Data feature to try to parse it correctly:

07400 16:31:30.320 Processing 51 log entries in <servername.615494dd0000.dblog> from servername
07784 16:31:30.492 Processing 51 log entries in <servername.615494e00000.dblog> from servername
07400 16:31:30.633 DBLog Summary: time=313ms (total=51, mean time=6.137/rec), Message:(c=32, t=297) Content:(c=5, t=0) NodeStats:(c=1, t=0) VirusScannerStats:(c=13, t=0)
07784 16:31:30.987 DBLog Summary: time=484ms (total=51, mean time=9.490/rec), Message:(c=35, t=469) Content:(c=4, t=0) NodeStats:(c=1, t=0) VirusScannerStats:(c=11, t=0)
07784 16:31:31.213 Processing 51 log entries in <servername.615494e00000.dblog> from servername
07784 16:31:31.278 DBLog Summary: time=62ms (total=51, mean time=1.216/rec), Message:(c=31, t=31) Content:(c=9, t=16) NodeStats:(c=1, t=0) VirusScannerStats:(c=10, t=0)
07784 16:31:31.691 Processing 51 log entries in <servername.615494e20000.dblog> from servername
07400 16:31:31.739 Rule Profiler: writing queued records to the database.
07400 16:31:31.745 Rule Profiler: finished writing queued records to the database. Record count: 53
07784 16:31:31.776 DBLog Summary: time=93ms (total=51, mean time=1.824/rec), Message:(c=31, t=78) Content:(c=6, t=0) NodeStats:(c=2, t=0) VirusScannerStats:(c=12, t=0)

In a regex tester I have used the regex (\d{5}\s+\d{2}:\d{2}:\d{2}.\d{3}\s+Processing 51) to correctly capture where the data needs to start a new line. I need the event data parsed so that each of the lines above becomes its own event, broken exactly as shown.

I have tried LINE_BREAKER=([\r\n]+), BREAK_ONLY_BEFORE, MUST_BREAK_AFTER, and MUST_NOT_BREAK_BEFORE, along with the regex shown above, in the Splunk wizard, but it will not break the data as needed. Thanks
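A props.conf sketch that tends to handle this shape of data, assuming the raw file does contain newlines between entries and that "dblog" stands in for the actual sourcetype name:

# props.conf (sketch)
[dblog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{5}\s+\d{2}:\d{2}:\d{2}\.\d{3}\s)
TIME_PREFIX = ^\d{5}\s+
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 15

The first capture group is what gets discarded at each break, and the lookahead keeps Splunk from breaking on newlines that are not followed by a thread ID and timestamp.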
Hello! I'm hoping some Splunk masters can help me with what I thought would be an easy task, but I'm very much stuck on it. How can I group results that start the same way into one result within the same field? I have a field whose results look like this:

index=prod_side sourcetype=prod_one fail_code=*
| table fail_code

Results:

fail_code
c-tr [213]
c-tr [893]
c-tr [309]
e-hw [gold]
e-hw [silver]
e-hw [bronze]
e-pr [vbix]
e-pr [zbix]
g-tr [345]
g-tr [123]
d-st [(intel.arm64) T 123 123]
d-st [(intel.arm64) T 456 456]

I want to group the results and count the total for each by the four characters before the brackets [ ]. The content in the brackets is not relevant to me and can be dropped from the results table:

fail_code_name   value_count
c-tr             3
e-hw             3
e-pr             2
g-tr             2
d-st             2
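A minimal sketch of one way to do this, assuming the grouping key is always the token before the first opening bracket:

index=prod_side sourcetype=prod_one fail_code=*
| rex field=fail_code "^(?<fail_code_name>\S+)\s*\["
| stats count as value_count by fail_code_name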
I am trying to create a dashboard with dynamic dropdowns using the new JSON-based Dashboard Studio. I'm not great at the XML of classic dashboards, but there are a good number of videos/sites that show how to do things and why. Dashboard Studio appears new enough that it doesn't have much for a rookie like me. I'd like something like https://www.youtube.com/watch?v=BJm04grvvf8, but for Dashboard Studio. Does anyone know of anything? I am coming up with nothing. I just have some documentation that I'm not great at reading and understanding, at https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/inputs, found from https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-Studio-Dynamic-loading-a-dropdown-list/m-p/556552#M38690.
Hello, I have a CSV file in this form:

2021-08-30 15:45:32;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;CONNEXION;;
2021-08-30 15:45:24;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;STATUS;;BDD
2021-08-30 15:45:16;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;START;App_start;WEB

corresponding to these 8 fields: date,application,user,host,ip,type,detail,module

I have two questions: how can I extract these fields, and how can I extract the fields at search time (to be able to be retroactive on old logs)?

These are my current props.conf and transforms.conf, deployed on the search head and indexers, and the inputs.conf on the universal forwarder:

props.conf

[csvlogs]
disabled = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
KV_MODE = none
REPORT-fieldsextraction = logs_fields

transforms.conf

[logs_fields]
DELIMS = ";"
FIELDS = date,application,user,hostname,ip,type,detail,module
KEEP_EMPTY_VALS = true

inputs.conf

[Monitor://D:\repository\logs.csv]
disabled = false
sourcetype = csvlogs
index = logs_index1

Do you have solutions?
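The REPORT-based setup above is already a search-time extraction (REPORT and transforms are applied when the search runs), so once the props.conf carrying the REPORT line is on the search head it should also apply retroactively to events indexed before it was deployed. A quick way to sanity-check the delimiter logic ad hoc, without touching any .conf files (field names taken from the transforms above):

index=logs_index1 sourcetype=csvlogs
| rex field=_raw "^(?<date>[^;]*);(?<application>[^;]*);(?<user>[^;]*);(?<hostname>[^;]*);(?<ip>[^;]*);(?<type>[^;]*);(?<detail>[^;]*);(?<module>[^;]*)$"
| table date application user hostname ip type detail module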