Hello everyone, I am currently investigating a scenario where a user appears to have opened porn sites several times, but the user denies doing so. We have logs showing that the sites were opened on the user's PC. Is there any way to check whether the user opened the sites intentionally, or whether the system was compromised and that caused the activity? The system does not have active antivirus software.
I am developing a Splunkbase app for anomaly detection, to be released soon. However, my 10 GB developer license for testing has expired and I want to renew it. I emailed sales@splunk.com a week ago but received no answer, and I could not find a renew button on my Splunk.com profile either. Any suggestions on who I can email to get help?
Hello, I'm collecting logs from the VMware Horizon client and there are two fields whose meaning I don't fully understand. Is this length in seconds, minutes, or hours? Second question: has anyone tried to calculate the session duration of Horizon logins and logoffs? It is not only simple logon/logoff pairs; a session can also time out and then the user reconnects. Ideally I would like to measure how many hours per day a user spends in the Horizon agent (to track working hours).
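A minimal sketch of one common approach, assuming the events carry a User field and that logon/reconnect and logoff/timeout events can be told apart by their message text (all field and message names below are placeholders, not actual Horizon field names):

```
index=horizon (Message="*logon*" OR Message="*logoff*" OR Message="*reconnect*" OR Message="*timeout*")
| transaction User
    startswith=eval(like(Message,"%logon%") OR like(Message,"%reconnect%"))
    endswith=eval(like(Message,"%logoff%") OR like(Message,"%timeout%"))
| bin _time span=1d
| stats sum(duration) as seconds by User _time
| eval hours=round(seconds/3600, 2)
```

Treating a reconnect as the start of a new transaction is what handles the timeout-then-reconnect case: each connected stretch is summed separately and then added up per day.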
Hi, I have the result set below in place. How do I get the max per row, and the month in which that max occurred? Something like this:

Module | Month when Max occurred | Max Value
AppSDK | Aug-2020 | 4.21
Comments | Aug-2020 | 0.10
ControlCenter | Jan-2020 | 0.72

Thanks
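Assuming the result set has Module as the row and one column per month (e.g. the output of a chart/timechart), one sketch is to untable it back into long form and keep only the highest value per module:

```
... | untable Module Month Value
| sort 0 Module -Value
| dedup Module
| rename Month as "Month when Max occurred", Value as "Max Value"
| table Module "Month when Max occurred" "Max Value"
```

untable turns each Module/Month cell into its own row; sorting by Value descending and then dedup-ing on Module keeps the row where the max occurred, month included.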
Hello, I am in the process of optimizing our entire SIEM environment.

1. Do you have a method for creating and prioritizing use cases?
2. How do you document your use cases, and what tool do you use?
3. Did you use any framework or process for this?

Thanks, fellas
Hi, I need to extract the first 7 characters of a network device name to identify the site, like this: | eval site=substr(Device,1,7). But if the first 7 characters equal mysite1, I also need the last 4 characters of the device name: substr(Device,-4). If the first 7 characters are not mysite1, I don't need the last 4. How can I merge those conditions into a single field? At the end of the query I have | chart count(Interface) as Interfaces by site, so I need only one eval'd site field. Thanks in advance!
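One way to keep everything in a single site field is a second eval with if(), a sketch assuming mysite1 is the literal 7-character prefix and the last 4 characters should simply be appended to it:

```
| eval site=substr(Device,1,7)
| eval site=if(site=="mysite1", site.substr(Device,-4), site)
| chart count(Interface) as Interfaces by site
```

substr with a negative start counts from the end of the string, so substr(Device,-4) returns the last 4 characters.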
Hi team, I have the sample raw events below in Splunk:

2020-11-30 19:15:26,726 ratingEnabled="[performance]"
2020-11-30 15:59:13,827 ratingEnabled="[objective,competency]" ratingEnabledGuideline="[objectiveGuideline]"
2020-11-30 05:34:40,986 ratingEnabled="[performance]" ratingEnabledGuideline="[performanceGuideline]"

Stats requirement: I want the ratio of enabled guidelines per rating. Analyzing the sample events above:
1. There are 3 ratings enabled (see the ratingEnabled field): performance, objective and competency.
2. There are 2 ratings with a guideline enabled (see the ratingEnabledGuideline field): objective and performance.

So the expected result I want Splunk to return is:

ratingEnabled | count | count - enabled Guideline | ratio - enabled guideline
performance | 2 | 1 | 50%
objective | 1 | 1 | 100%
competency | 1 | 0 | 0%

How can I compute this ratio with stats?
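A sketch of one approach: strip the brackets, expand each rating into its own row, then check whether a matching guideline entry exists. This assumes every guideline value is exactly the rating name plus the literal suffix Guideline, as in your sample:

```
| eval rating=split(trim(ratingEnabled, "[]"), ",")
| mvexpand rating
| eval hasGuide=if(isnotnull(mvfind(split(trim(ratingEnabledGuideline, "[]"), ","), rating."Guideline")), 1, 0)
| stats count as count, sum(hasGuide) as "count - enabled Guideline" by rating
| eval "ratio - enabled guideline"=round(100 * 'count - enabled Guideline' / count) . "%"
```

mvfind returns null when no multivalue entry matches (including when ratingEnabledGuideline is absent), so hasGuide becomes 0 for those rows.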
Hi, _introspection reports a higher value for total memory than the machine's real physical memory, and the memory-used figure is also wrong (close to the inflated maximum). OS tools report the real values, which are far lower. Any idea why, and how to fix it? If I understand correctly, _introspection is fed by splunkd. A reboot fixed it for now, but what should be done to prevent it from reappearing? thx afx
This is the snippet I need to get the state from (I am a beginner in Splunk):

| server_state=RUNNING | server_health=Component:ServerRuntime , State:HEALTH_OK , MBean:managed2, ReasonCode:[]
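A minimal sketch using rex, assuming the snippet is in _raw and the state value is always a single word after server_state=:

```
| rex field=_raw "server_state=(?<server_state>\w+)"
| table server_state
```

The same pattern extended to the health field would be, e.g., "State:(?<server_health_state>\w+)".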
Hi everyone, I'm currently facing an issue in a search head cluster. When I search an index over a long time range (e.g. last 3 days), search.log shows the error "Search auto-canceled". Our search head cluster has three nodes and sits behind a Kong API gateway. On Kong we have already configured sticky sessions (hashed on the cookie named splunkd_8000). I tested the three cases below:

- search directly on one node of the cluster (traffic does not go through Kong) --> OK
- disable two nodes and search on the domain (traffic goes through Kong) --> OK
- search on the domain with all three cluster nodes up --> the error above

Please see the picture below if the description is not clear enough. Does anyone have an idea how to resolve this? Thank you very much!
Hi, I have configured eStreamer according to the official Cisco guide, but no messages reach Splunk and I get the error message below. Can someone help resolve the issue? Thanks. https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSplunkOperationsGuide_409.html Running ./splencore.sh test shows the message below.
Splunk 7.2 with a cluster of 2 indexers. I want to push index definitions to the peers from the master node, but the bundle push fails, stating that "Auto" is an illegal value for repFactor and that the default value is "0". I want the indexes replicated, so "0" is not an option according to the documentation. What could be the reason for this error? Both peers are up, and there are no errors on the master's Indexer Clustering dashboard. Content of the master node's indexes.conf in /opt/splunk/etc/master-apps/_cluster/local/:

[log_windows]
repFactor = Auto
enableDataIntegrityControl = 0
enableTsidxReduction = 0
maxTotalDataSizeMB = 512000
coldPath = $SPLUNK_DB/log_windows/colddb
homePath = $SPLUNK_DB/log_windows/db
thawedPath = $SPLUNK_DB/log_windows/thaweddb
[pg_test]
repFactor = Auto
enableDataIntegrityControl = 0
enableTsidxReduction = 0
maxTotalDataSizeMB = 512000
coldPath = $SPLUNK_DB/pg_test_kiwi/colddb
homePath = $SPLUNK_DB/pg_test_kiwi/db
thawedPath = $SPLUNK_DB/pg_test_kiwi/thaweddb

Errors:
Config validation failure reported in peer=sanitized_indexer1 guid=(sanitized_guid_1). stanza=pg_test parameter=repFactor Value supplied='Auto' is illegal; default='0'; stanza=log_windows parameter=repFactor Value supplied='Auto' is illegal; default='0';
Config validation failure reported in peer=sanitized_indexer2 guid=(sanitized_guid_2). stanza=pg_test parameter=repFactor Value supplied='Auto' is illegal; default='0'; stanza=log_windows parameter=repFactor Value supplied='Auto' is illegal; default='0';
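In indexes.conf, repFactor accepts either 0 or the lowercase keyword auto; the capitalized "Auto" is rejected, which matches the validation errors above. Changing both stanzas should let the bundle push succeed:

```
[log_windows]
repFactor = auto

[pg_test]
repFactor = auto
```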
I have a regex, tested and working, that extracts the host from these events. My transforms.conf is as follows:

[hostextraction]
REGEX = ^.*\d+\s(.*)ASM:.*
FORMAT = host::$1
DEST_KEY = MetaData:Host

props.conf:

[myst]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = ^.{1,16}\b(?:\d{1,3}\.){3}\d{1,3}\b\s
TRANSFORMS-whateva = hostextraction

This has no effect on the host metadata key. I am taking this directly from the Splunk documentation, but I am getting this message in _internal: ERROR regexExtractionProcessor - REGEX field must be specified tranform_name=hostextraction. Any help is much appreciated!
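For an index-time host override, the pairing would normally look like the sketch below, with both files on the same instance that first parses the data (indexer or heavy forwarder). The "REGEX field must be specified" error usually means that the [hostextraction] stanza Splunk resolves at parse time has no REGEX line, e.g. because the transforms.conf stanza lives in a different app (or on a different instance) than the props.conf reference, so it is worth checking with btool (splunk btool transforms list hostextraction --debug) where the stanza is actually being read from:

```
# transforms.conf
[hostextraction]
REGEX = ^.*\d+\s(.*)ASM:.*
FORMAT = host::$1
DEST_KEY = MetaData:Host

# props.conf
[myst]
TRANSFORMS-whateva = hostextraction
```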
Hi team, we are adding support for AppDynamics in our product and want to test our implementation. We would like to know how we can test it using a trial license, or whether there are any tools for testing it. Note: we found that we cannot test our implementation with the trial license, because the trial license does not cover the C/C++ SDK. Since we are only adding AppDynamics support for our customers to use, we do not want to buy a license just to test our implementation. The thread below in the AppDynamics community discusses the same issue: https://community.appdynamics.com/t5/Controller-SaaS-On-Premise/C-native-client-monitoring-does-not-work/td-p/32526 Thank you, Rajesh
Hi, my aim is to create an alert that runs a first search looking for at least 10 similar events within the last 30 minutes, then a second search looking for at least one occurrence of a different log. Finally it should compare which log came first using min(_time) and trigger only if both searches returned results. The time comparison can be omitted, but it is important that both searches returned something. Both searches work fine when I run them separately, but when I append one to the other, I receive only the logs from the first search. As the outcome I would like to see the logs/events instead of a stats table. I am a novice, so if anybody could give me some hints on how to make this work as expected, it would be great. Here is what I have done so far:

index="index1" (Message=SEARCH1 earliest=-31m@m latest=-1m@m )
| transaction Message SrcIP | where eventcount > 10
| search Message="SEARCH1"
| eventstats min(_time) as condition1
| append
[ | search index="index1" SEARCH3 earliest=-61m@m latest=-1m@m
| eventstats min(_time) as condition2 ]
| where condition1 > condition2
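One hedged alternative: reduce each search to a single summary row and place them side by side with appendcols, so both thresholds and the time comparison can be tested in one where clause (the condition1/condition2 names are kept from your attempt; SrcIP grouping is dropped here for simplicity):

```
index="index1" Message="SEARCH1" earliest=-31m@m latest=-1m@m
| stats count as count1, min(_time) as condition1
| appendcols
    [ search index="index1" SEARCH3 earliest=-61m@m latest=-1m@m
    | stats count as count2, min(_time) as condition2 ]
| where count1 > 10 AND count2 > 0 AND condition1 > condition2
```

This yields a single summary row (the alert fires when any result is returned) rather than the raw events; to display the events themselves, one option is to alert on this summary and have the alert action link to or run the event-level search.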
I am working through Boss of the SOC v1 (SIEM, Splunk), but I cannot understand question 14: one of Po1s0n1vy's staged domains has some "unique" Whois information, and the task says to concatenate the two codes and submit them as a single answer. I don't understand what to enter and cannot find the answer... any suggestions?
I have the following search: index=aa sourcetype="bb" Service="/abc" OR Service="/mno" OR Service="/xyz" | chart count over Service by ZCode | addtotals

This returns the count of each ZCode across /abc, /mno, and /xyz. An example looks like this:

Service | 200 | 400 | 500 | 504 | Total
/abc | 2 | 3 | 1 | 2 | 8
/mno | 1 | 4 | 0 | 1 | 6
/xyz | 4 | 0 | 5 | 2 | 11

The problem I'm facing: if the values for 200, 400, 500, and 504 are all zero for one service, that service no longer appears at all. I'm very new to Splunk and did a fair amount of research, but could not find a solution. Thank you!
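One sketch: append a placeholder row for every expected service so the chart always contains all three, fill the gaps with zeros, and fold duplicates back together (the service list is hard-coded here):

```
index=aa sourcetype="bb" Service="/abc" OR Service="/mno" OR Service="/xyz"
| chart count over Service by ZCode
| append
    [| makeresults
     | eval Service=split("/abc,/mno,/xyz", ",")
     | mvexpand Service
     | fields Service]
| fillnull value=0
| stats max(*) as * by Service
| addtotals
```

The stats max(*) as * by Service step merges each real row with its zero-filled placeholder; a service with no events at all survives as an all-zero row with Total=0.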