I am trying to convert GMT time to CST time. I am able to get the desired data using the query below; now I am looking for a way to convert GMT to CST in the query.

index=test AcdId="*" AgentId="*" AgentLogon="*" chg="*" seqTimestamp="*" currStateStart="*" currActCodeOid="*" currActStart="*" schedActCodeOid="*" schedActStart="*" nextActCodeOid="*" nextActStart="*" schedDate="*" adherenceStart="*" acdtimediff="*"
| eval seqTimestamp=replace(seqTimestamp,"^(.+)T(.+)Z$","\1 \2")
| eval currStateStart=replace(currStateStart,"^(.+)T(.+)Z$","\1 \2")
| eval currActStart=replace(currActStart,"^(.+)T(.+)Z$","\1 \2")
| eval schedActStart=replace(schedActStart,"^(.+)T(.+)Z$","\1 \2")
| eval nextActStart=replace(nextActStart,"^(.+)T(.+)Z$","\1 \2")
| eval adherenceStart=replace(adherenceStart,"^(.+)T(.+)Z$","\1 \2")
| table AcdId, AgentId, AgentLogon, chg, seqTimestamp, seqTimestamp1, currStateStart, currActCodeOid, currActStart, schedActCodeOid, schedActStart, nextActCodeOid, nextActStart, schedDate, adherenceStart, acdtimediff

Below are the results I am getting:
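Note that the replace() calls above only reformat the string; they do not shift the timezone. In Splunk the usual approach is to parse the string to an epoch with strptime() and then render it with strftime() in the desired zone, but the underlying conversion the query needs looks like this Python sketch (the timestamp format is an assumption based on the regex in the search):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def gmt_to_cst(ts: str) -> str:
    """Convert an ISO-8601 UTC timestamp like '2023-11-20T17:00:30Z'
    to US Central time, formatted as 'YYYY-MM-DD HH:MM:SS'."""
    # Parse the naive string, then mark it explicitly as UTC.
    utc = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    # America/Chicago handles the CST/CDT daylight-saving switch for us.
    central = utc.astimezone(ZoneInfo("America/Chicago"))
    return central.strftime("%Y-%m-%d %H:%M:%S")
```

Using an IANA zone name rather than a fixed -6:00 offset keeps daylight-saving transitions correct.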
Hi, there are a lot of clients in my architecture, and each Splunk instance is deployed in either /opt/bank/splunk, /opt/insurance/splunk, or /opt/splunk. I want to run a command to extract a list of all clients along with the path where splunkd is running. How can I achieve this? Please suggest.
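One option is to parse the running process list on each host for splunkd command lines. A minimal Python sketch of the parsing step, assuming `ps -eo args`-style input (the sample output in the test is hypothetical):

```python
import re

def parse_splunk_homes(ps_output: str) -> list[str]:
    """Extract distinct Splunk install paths from process-list output
    by matching lines whose command is <path>/bin/splunkd."""
    homes = set()
    for line in ps_output.splitlines():
        # Capture everything before /bin/splunkd as the install path.
        m = re.match(r"(/\S+)/bin/splunkd\b", line.strip())
        if m:
            homes.add(m.group(1))
    return sorted(homes)
```

You could feed it the output of `ps -eo args` collected over SSH or via your deployment tooling, then map each host to its discovered path.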
Hello, I'm trying to resolve a monitoring issue with the .csv files available in a specific directory. There are several files, each marked with a different date, e.g. 2023-11-16_filename.csv or 2023-11-20_filename.csv; for this reason none of them share the same date prefix. I'm able to sync most of the files with the server, but there are some I'm not. For example, my indexing started on 02.10.23, and all the files with that date or later are available as a source. But all the files dated before then are not, e.g. 2023-09-15_filename.csv. What could cause this behavior, and is there a way to make files available as a source even if they are marked with a date before 02.10.2023? Thanks
I have an inputlookup table, and in this lookup table there is a JSON array called "Evidence". There are two fields I would like to extract: "Rule" and "Criticality". An example of the Evidence array looks like this:

{"Evidence":[{"Rule":"Observed in the Wild Telemetry","Criticality":1},{"Rule":"Recent DDoS","Criticality":3}]}

So if I eval both "Rule" and "Criticality" as shown below:

| eval "Rule"=spath(Evidence, "Evidence{}.Rule")
| eval "Criticality"=spath(Evidence, "Evidence{}.Criticality")
| table Rule Criticality

The output shows the values, but the Rule and Criticality columns don't separate into different rows (it is all in one row):

Rule: Observed in the Wild Telemetry, Recent DDoS
Criticality: 1, 3

Now the tricky part: I would like to display the top count of Rule (top Rule limit=10), but at the same time display the Criticality associated with each Rule. How do I do it, since the above does not separate into different rows? The final output I am looking for would look like this:

Rule                            Criticality  Count
Observed in the Wild Telemetry  1            50
Recent DDoS                     3            2

An alternative I was thinking of was using foreach and then concatenating into a combined field, but I think that is kind of complex.
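In SPL this usually means pairing the two multivalue fields before counting (mvzip plus mvexpand is the common pattern, though the exact search depends on your data). The row-level logic, sketched in Python using sample events taken from the post:

```python
import json
from collections import Counter

def top_rules(events: list[str], limit: int = 10) -> list[tuple]:
    """Flatten each event's Evidence array into one row per rule,
    then return the top `limit` rules with their criticality and count."""
    counts = Counter()
    criticality = {}
    for raw in events:
        for item in json.loads(raw)["Evidence"]:
            counts[item["Rule"]] += 1
            # Each rule is assumed to always carry the same criticality.
            criticality[item["Rule"]] = item["Criticality"]
    return [(rule, criticality[rule], n) for rule, n in counts.most_common(limit)]
```

The key step is expanding the array into one record per (Rule, Criticality) pair first; counting afterwards is then straightforward.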
Does the AppDynamics Machine Agent support Windows 10? I can see the "Machine agent started" message, and under Servers I can see the processes of my system running, along with PIDs, for the system where my machine agent is hosted. However, I am not able to get %CPU, disk, or memory-related metrics. When I try to access the same from the Metric Browser, it says there is no data to display. Please suggest.
Hello, I have this search:

index="report"
| stats count(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches >= 1 AND totalNumberOfPatches <= 5, "Low Exposure",
    totalNumberOfPatches >= 6 AND totalNumberOfPatches <= 9, "Medium Exposure",
    totalNumberOfPatches >= 10, "High Exposure",
    totalNumberOfPatches == 0, "Compliant",
    1=1, "<not reported>")

I want to create a pie chart for each exposure_level and color each one a different color. How can I do it? Thanks
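For reference, the case() logic above restated as a plain function (a sketch; the thresholds are taken directly from the search):

```python
def exposure_level(total_patches: int) -> str:
    """Map a patch count to an exposure band, mirroring the SPL case()
    branches in the same order."""
    if 1 <= total_patches <= 5:
        return "Low Exposure"
    if 6 <= total_patches <= 9:
        return "Medium Exposure"
    if total_patches >= 10:
        return "High Exposure"
    if total_patches == 0:
        return "Compliant"
    return "<not reported>"  # the 1=1 catch-all branch
```

Like SPL's case(), the branches are evaluated top to bottom and the first match wins.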
Hello, I am a Masterschool student trying to install Splunk on my VM, and it doesn't work. Can anyone help? Thank you.
How do I capture users aged over 59 accessing their accounts on a daily basis in AppDynamics? Can this be done using an information point, or is there another method to calculate and get the data?
I have installed a free version of Splunk Enterprise 9.1 on my local system. I need a few log files from my S3 bucket to be sent to Splunk, so I have set up the Splunk Add-on for AWS. In the app, under Configuration, I created an account with an access ID and secret access key, then created an input by specifying the account name, bucket name, and indexing details. After creating the input, when I search my index and sourcetype, I cannot find the logs from S3. I waited for more than half an hour, then tried again, but no luck. As this is the first time I am trying the setup with the AWS add-on, I am not sure where the issue is. Could anyone please help me with this?
Hi, we have been informed about a high-severity vulnerability (CVE-2023-46214) impacting Splunk Enterprise (RCE in Splunk Enterprise through insecure XML parsing). Are we affected, given that we are on Splunk Cloud version 9.0.2303.201? Thanks.
Hi all, I have two multiselect dropdowns, one dependent on the other. The first dropdown has groups and the second has subgroups. I am facing a problem appending the subgroup value to its respective group. For example, let's assume the group dropdown has values a, b, c, and only c has subgroups, x and y. I want to append the subgroups as c_x and c_y and pass them to the query. I tried adding a suffix in the dropdown itself, but when the tokens are selected in any order, the suffix is added to the whole token: if I select b, c, a, it produces b,c,a_x and b,c,a_y. Any suggestions on how I can correctly append the subgroups to their respective groups and use the result in the query?
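The multiselect suffix option applies to the whole joined token, which is why the order-dependent result appears; the expansion has to happen per selected value instead. The intended per-item logic, sketched in Python (the subgroup mapping here is hypothetical):

```python
def append_subgroups(selected: list[str], subgroup_map: dict) -> list[str]:
    """Expand each selected group into group_subgroup values where a
    subgroup exists; groups without subgroups pass through unchanged."""
    out = []
    for group in selected:
        subs = subgroup_map.get(group)
        if subs:
            # Suffix is applied to this group only, not the whole token.
            out.extend(f"{group}_{s}" for s in subs)
        else:
            out.append(group)
    return out
```

In a dashboard, this kind of per-value rewrite typically has to happen in a token-change handler or in the search itself, rather than via the multiselect's built-in suffix.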
25/10/2023   6000
31/10/2023      0
 6/11/2023   2500
 6/11/2023    500
12/11/2023  -7800
16/11/2023    500

I have the table above, and I'm trying to create a line chart that starts at 6000, then runs as a straight line until it hits the date 6/11/2023, at which point it steps up at 90 degrees to 8500, and so on: stepping up at 90 degrees for positive values and down at 90 degrees for negative values, keeping a running total. Thanks,
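In SPL a running total is usually produced with streamstats (something like streamstats sum(value) as total, though the exact search depends on your field names). The arithmetic the chart needs, using the values from the table:

```python
from itertools import accumulate

def running_total(deltas: list[int]) -> list[int]:
    """Cumulative sum of the per-date change values, giving the series
    the step-style line chart should plot."""
    return list(accumulate(deltas))
```

For the data above, running_total([6000, 0, 2500, 500, -7800, 500]) yields the series the chart should trace; plotting it with a step interpolation gives the 90-degree rises and falls.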
Hi, I am fairly new to AppDynamics and I am a bit puzzled by some behaviour with Node.js transaction snapshots. Could anyone explain the following? An HTTP request comes into a Node.js application, which makes another HTTP request to an external service. All the calls are async and there is no specific correlation set up. I am expecting one outbound request for each inbound request; however, I sometimes see many outbound request calls. Is this because AppD is just sampling the process at the time of the snapshot and showing all outbound calls occurring at that time? Many thanks, H
I've got a new deployment of 9.1.1, upgraded from a prior version I can't remember off the top of my head. I am running Windows 2019, if that is relevant. When I log in I get the following message:

Failed to upgrade KV Store to the latest version. KV Store is running an old version, service(36). Resolve upgrade errors and try to upgrade KV Store to the latest version again. Learn more. 11/20/2023, 12:04:48 PM

If I shut down splunkd and then run

splunk.exe migrate migrate-kvstore -v

I get the following error:

[App Key Value Store migration] Starting migrate-kvstore.
Started standalone KVStore update, start_time="2023-11-20 12:00:29".
failed to add license to stack enterprise, err - stack already has this license, cannot add again
[App Key Value Store migration] Checking if migration is needed. Upgrade type 1. This can take up to 600seconds.
2023-11-20T17:00:30.187Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2023-11-20T17:00:30.193Z F CONTROL [main] Failed global initialization: InvalidSSLConfiguration: CertAddCertificateContextToStore Failed The object or property already exists.
mongod exited abnormally (exit code 1, status: exited with code 1) - look at mongod.log to investigate.
KV Store process terminated abnormally (exit code 1, status exited with code 1). See mongod.log and splunkd.log for details.
WARN: [App Key Value Store migration] Service(40) terminated before the service availability check could complete. Exit code 1, waited for 0 seconds.
App Key Value Store migration failed, check the migration log for details. After you have addressed the cause of the service failure, run the migration again, otherwise App Key Value Store won't function.

No entries are ever posted to mongod.log. Just to verify, I cleared out the var/log/splunk directory; after moving the folder and rerunning the command, the folders are regenerated, but the mongod.log file is never created.
Any advice on how to get the KV Store to migrate?
Has anyone been successful logging command-execution events on RedHat and having them sent to Splunk via rsyslog? The logs get written to tty, but they are not making their way to our HF. We can easily log all auditd and system events, but nothing for command execution.
Hello, why does a long base search not work in a drop-down list? For example, if the base query with id="StudentName" is long enough to produce "Request-URI Too Long", the drop-down does not populate, but it works just fine in the pie chart. Please help; thank you so much.

<search id="StudentName">
  <query>index=test</query>
  <earliest>$time_token.earliest$</earliest>
  <latest>$time_token.latest$</latest>
</search>
<input type="dropdown" token="StudentTok">
  <label>Student Name</label>
  <fieldForLabel>studentname</fieldForLabel>
  <fieldForValue>studentname</fieldForValue>
  <search base="StudentName">
    <query>| head 10</query>
  </search>
</input>
How do I count the number of unique recipients of each unique attachment from emails? The same user could receive the same attachment in multiple emails. Should I use the "dedup" command?
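dedup can work, but a distinct count per attachment (in SPL, something like stats dc(recipient) by attachment, with field names adjusted to your data) is usually the more direct route. The logic, sketched in Python with hypothetical (recipient, attachment) pairs:

```python
from collections import defaultdict

def recipients_per_attachment(events: list[tuple]) -> dict:
    """Count distinct recipients per attachment; the same attachment
    delivered to the same user in multiple emails counts once."""
    seen = defaultdict(set)
    for recipient, attachment in events:
        seen[attachment].add(recipient)  # the set deduplicates for us
    return {a: len(r) for a, r in seen.items()}
```

The set per attachment plays the role that dedup or dc() would play in the search.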
How do I count the number of emails from a search, but only include recipients that received ten or more emails?
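In SPL this is typically a count per recipient followed by a where filter (stats count by recipient | where count >= 10, with the field name adjusted to your data). The same filter in Python:

```python
from collections import Counter

def heavy_recipients(recipients: list[str], threshold: int = 10) -> dict:
    """Count emails per recipient and keep only those that received
    at least `threshold` emails."""
    counts = Counter(recipients)
    return {r: n for r, n in counts.items() if n >= threshold}
```

Counting first and filtering on the aggregate afterwards mirrors the stats-then-where order of the search.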
Hi, I am using an external lookup to run a Python script that makes an API call and returns the results via a csv.DictWriter writing to sys.stdout. Around 1250 rows are being written to the console, but only the first 100 rows are shown in Splunk. How can I disable this 100-row limit on external lookups? Thank you and have a nice day. Best,