All Posts

Try something like this:

index="report"
| stats count(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches >= 1 AND totalNumberOfPatches <= 5, "Low Exposure",
    totalNumberOfPatches >= 6 AND totalNumberOfPatches <= 9, "Medium Exposure",
    totalNumberOfPatches >= 10, "High Exposure",
    totalNumberOfPatches == 0, "Compliant",
    1=1, "<not reported>")
| xyseries Computer_Name exposure_level totalNumberOfPatches

Then set your trellis to split by exposure_level.
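If the panel lives in a Simple XML dashboard, the trellis split can also be pinned in the panel's XML rather than through the Format menu. This is only a rough sketch using the standard trellis layout options; whether exposure_level is offered as a split choice here depends on how the panel reads the xyseries output:

<option name="trellis.enabled">1</option>
<option name="trellis.splitBy">exposure_level</option>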
I am trying to convert the GMT time to CST time. I am able to get the desired data using the query below. Now I am looking for a query to convert GMT time to CST.

index=test AcdId="*" AgentId="*" AgentLogon="*" chg="*" seqTimestamp="*" currStateStart="*" currActCodeOid="*" currActStart="*" schedActCodeOid="*" schedActStart="*" nextActCodeOid="*" nextActStart="*" schedDate="*" adherenceStart="*" acdtimediff="*"
| eval seqTimestamp=replace(seqTimestamp,"^(.+)T(.+)Z$","\1 \2")
| eval currStateStart=replace(currStateStart,"^(.+)T(.+)Z$","\1 \2")
| eval currActStart=replace(currActStart,"^(.+)T(.+)Z$","\1 \2")
| eval schedActStart=replace(schedActStart,"^(.+)T(.+)Z$","\1 \2")
| eval nextActStart=replace(nextActStart,"^(.+)T(.+)Z$","\1 \2")
| eval adherenceStart=replace(adherenceStart,"^(.+)T(.+)Z$","\1 \2")
| table AcdId, AgentId, AgentLogon, chg, seqTimestamp, seqTimestamp1, currStateStart, currActCodeOid, currActStart, schedActCodeOid, schedActStart, nextActCodeOid, nextActStart, schedDate, adherenceStart, acdtimediff

Below are the results I am getting:
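One common way to do the shift (a sketch only, not tested against this data): parse the cleaned-up string into epoch time with strptime(), subtract the UTC-to-Central offset, and format it back with strftime(). The field names seq_epoch and seqTimestamp_cst are made up, and the fixed 6-hour offset assumes standard Central time (CST is UTC-6; during daylight saving, CDT is UTC-5):

| eval seq_epoch = strptime(seqTimestamp, "%Y-%m-%d %H:%M:%S")
| eval seqTimestamp_cst = strftime(seq_epoch - 6*3600, "%Y-%m-%d %H:%M:%S")

The same pair of evals would be repeated for the other timestamp fields; if the values carry fractional seconds, the format string would need something like %H:%M:%S.%Q.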
pie chart
I got some help from a co-worker which looks to solve my issue; here is the query that he provided me with. All the credit goes to him btw!

| tstats summariesonly=true count from datamodel="Authentication" WHERE Authentication.action="failure" AND Authentication.user="*" AND Authentication.src="*" AND Authentication.user!=*$ by _time span=1d,Authentication.user
| `drop_dm_object_name("Authentication")`
| sort 0 - _time
| eval date=strftime(_time,"%Y-%m-%d %H:%M:%S")
| transaction user maxpause=24h mvlist=true
| stats max(eventcount) as maxeventcount by user
| where maxeventcount>5
| rename maxeventcount as DaysInRow

It uses the transaction command instead (which I need to study a bit to understand) and also works around the sort command limitation of 10000 events. I will let him know about the approach you're suggesting and see what he thinks about that one.
Where is this information coming from?
| spath Evidence{} output=Evidence | mvexpand Evidence | spath input=Evidence | stats count by Rule Criticality
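For reference, a rough end-to-end sketch of how that pipeline might sit on top of the lookup described in the question and then keep the ten most common rules. The lookup name threat_evidence.csv and the intermediate field name item are made up for illustration:

| inputlookup threat_evidence.csv
| spath input=Evidence path=Evidence{} output=item
| mvexpand item
| spath input=item
| stats count by Rule, Criticality
| sort - count
| head 10

Counting by both Rule and Criticality keeps the criticality attached to each rule without needing foreach.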
Hello @Stives , what does your input stanza look like? If no crcSalt is specified in the stanza, Splunk will look at the first (I think 256) bytes of a file and determine from that whether it already knows the file. If the first bytes of the CSV files will always be the same, you could change your input stanza and add

crcSalt = <SOURCE>

Docs on the monitor stanza, for a deeper look into crcSalt: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf#MONITOR: But be cautious: this tells Splunk to use the full path to decide whether a file has already been indexed, so there is a possibility that you index the same file twice, especially for directories with rolling log files. Another possibility is that the dates are outside the retention time scope (if the files were indexed once but, due to retention, were removed again once their bucket was no longer hot).
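Purely as an illustration of where that setting would go, a monitor stanza might look roughly like this; the path, index, and sourcetype are placeholders, and crcSalt = <SOURCE> is the relevant addition:

[monitor:///opt/data/csv_drop/*.csv]
crcSalt = <SOURCE>
index = main
sourcetype = csv
disabled = false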
Hi, @gcusello ,

Thanks for the reply. I have one concern: in the multiselect dropdown, the values selected will be a,b,c or b,c,a etc., which will be comma separated. In such conditions, will this logic work?
What is pai?
Hi, there are a lot of clients in my architecture, and every Splunk instance is deployed in either /opt/bank/splunk, /opt/insurance/splunk, or /opt/splunk. Hence I want to run a command to extract a list of all clients along with the path where splunkd is running. How can I achieve this? Please suggest.
Hello, I'm trying to resolve a monitoring issue with the .csv files in a specific directory. There are several files marked with different dates, e.g. 2023-11-16_filename.csv or 2023-11-20_filename.csv, so none of them starts with the same date. I'm able to sync most of the files with the server, but there are some which I'm not. For example, my indexing started on 02.10.23, and all the files with that date or later are available as a source. But all the files before this date are not, e.g. 2023-09-15_filename.csv. What could cause this behaviour, and is there a way to make files available as a source even if they are marked with a date before 02.10.2023? Thanks
I have an inputlookup table, and in this lookup table there is a JSON array called "Evidence". There are two fields I would like to extract: one is "Rule" and the other is "Criticality". An example of the Evidence array will look like this:

{"Evidence":[{"Rule":"Observed in the Wild Telemetry","Criticality":1},{"Rule":"Recent DDoS","Criticality":3}]}

So if I eval both "Rule" and "Criticality" as shown below:

| eval "Rule"=spath(Evidence, "Evidence{}.Rule")
| eval "Criticality"=spath(Evidence, "Evidence{}.Criticality")
| table Rule Criticality

The output shows up like this, but the Rule and Criticality columns don't separate into different rows (it is all in one row):

Rule                                          Criticality
Observed in the Wild Telemetry, Recent DDoS   1, 3

Now the tricky part: I would like to display the top count of Rule (top Rule limit=10) but at the same time display the Criticality associated with the Rule. How do I do it, since the above does not separate into different rows? The final output I am looking for will look like this:

Rule                              Criticality    Count
Observed in the Wild Telemetry    1              50
Recent DDoS                       3              2

An alternative I was thinking of was using foreach and then concatenating it into a combined field, but I think that is kind of complex.
Does the AppDynamics Machine Agent support Windows 10? I am able to see the message that the Machine Agent has started. Under Servers I can see the processes of my system running, along with PIDs, for the system where my Machine Agent is hosted. However, I am not able to get %CPU, disk, or memory related metrics. When I try to access the same from the Metric Browser, it says there is no data to display. Please suggest.
I am curious to know about a couple of things related to fetching S3 logs. Is there any limitation on the number of inputs we can create in the AWS add-on? Is there any limitation on the indexes to which we log the S3 data?
Yes, there was some error with the endpoint. I checked the error via the query below:

index=_internal sourcetype=aws:s3:log ERROR
Hello, I have this search:

index="report"
| stats count(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches >= 1 AND totalNumberOfPatches <= 5, "Low Exposure",
    totalNumberOfPatches >= 6 AND totalNumberOfPatches <= 9, "Medium Exposure",
    totalNumberOfPatches >= 10, "High Exposure",
    totalNumberOfPatches == 0, "Compliant",
    1=1, "<not reported>")

and I want to create a pai (pie chart) for each exposure_level and color the pai in a different color. How can I do it? Thanks
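Not a confirmed answer, just one possible shape of it: a pie chart generally wants one row per slice, so the existing search can be reduced to a count of hosts per exposure_level, and the slice colors can then be pinned in the dashboard's Simple XML with the charting.fieldColors option. The hex values below are arbitrary placeholders:

... your existing search ...
| stats count as hosts by exposure_level

<option name="charting.fieldColors">{"Low Exposure":0x53A051,"Medium Exposure":0xF8BE34,"High Exposure":0xDC4E41,"Compliant":0x006D9C}</option>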
@splunk Is there any solution for this?
That's another question, completely unrelated to the original issue. See my response in this thread https://community.splunk.com/t5/Deployment-Architecture/What-are-the-best-practices-for-creating-and-distributing-apps/m-p/668990 for managing apps.
You can do tstats from a datamodel. That's not so minimal. You simply change your initial search to

| tstats [...] by Authentication.user _time span=1d

That's what gives you data with which you can work further.

| `drop_dm_object_name("Authentication")`

This doesn't hurt us. I admit, the next step is a bit advanced. I won't yet give you a complete solution, but I will point you in the right direction. You need to do streamstats to carry over the information about when the last occurrence of a given user was in those statistics.

| streamstats current=f last(_time) as last by user

This will give you the last time the given user was included, along with your "current" occurrence. Now you can see whether the difference is just one day or more, which will tell you whether the streak was continuous or not. That's for a start.
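Not the complete solution the post deliberately holds back, but as a rough sketch of where the streak counting could go from that point (gap, new_streak, streak_id, and days_in_row are made-up names):

| eval gap = (last - _time) / 86400
| eval new_streak = if(isnull(gap) OR gap > 1, 1, 0)
| streamstats sum(new_streak) as streak_id by user
| stats count as days_in_row by user, streak_id
| where days_in_row > 5

Each time the gap is more than one day (or the row is the user's first), a new streak id starts, so counting rows per user and streak_id gives the length of each consecutive run.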
Hello, after adding that property I now have this error:

The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:f6d04b57-f4f7-4bc2-93ca-d3ac59ad7b4b

Splunk: 9.0.4.1
DB Connect: 3.14.1
Java: openjdk 17.0.8.1

Funny thing, I don't have that problem on another server:

Splunk: 9.0.4.1
DB Connect: 3.12.2
Java: openjdk 17.0.7