Hi @Alex_Rus , I don't know if it's a typo, but you have to use backslashes in Windows paths:

[monitor://C:\MyFolder\MyFolder1\*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1

[monitor://C:\Program Files\Microsoft\Exchange Server\...\*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1

Ciao. Giuseppe
Hi, Giuseppe! Thank you for your answer. Let me explain the situation. The application is configured to collect logs from four hosts. On two of them the data is collected in the internal storage C:\Program Files\Microsoft\Exchange Server\... and the data comes from these hosts correctly. On the other two hosts the data is collected in a folder that is moved to a separate disk, C:\MyFolder\MyFolder1\*. My stanza looks like:

[monitor://C:/MyFolder\MyFolder1/*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1

[monitor://C:/Program Files/Microsoft/Exchange Server/.../*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1
@KendallW We are seeing this message:

INFO ThruputProcessor [2963 parsing] - Current data throughput (5125 kb/s) has reached maxKBps. As a result, data forwarding may be throttled. Consider increasing the value of maxKBps in limits.conf.

We will try increasing the limits.
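For reference, the forwarder's throughput ceiling lives in the [thruput] stanza of limits.conf; a minimal sketch (the value shown is illustrative — 0 removes the limit entirely, which may not be what you want on a shared link):

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
maxKBps = 1024
```

A restart of the forwarder is needed for the change to take effect.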
@Hiroshi We are able to access the partner support portal now. Please check. Go to the partner portal at https://splunk.my.site.com/partner/s/ and open "My Cases". Karma Points are appreciated!
Hi, I have a requirement where I have a table on my dashboard created using Dashboard Studio. I need to redirect to another dashboard when a user clicks a Column A cell. Also, when a user clicks a Column C cell, the user should be redirected to a URL. How can we achieve this linking to a dashboard and to a URL in the same table, based on the column clicked?
Hello @ITGSOC , Yes, you can migrate a Splunk Enterprise server from a virtual machine (VM) to a physical server.

Before starting the migration, make sure to take a complete backup of your Splunk data, configurations, and any custom settings. Ensure that the physical server meets the hardware requirements for running Splunk and that the operating system is compatible with the version of Splunk you're using.

Transfer your configuration files and data from the virtual machine to the physical server. This typically includes files in the etc directory within your Splunk installation ($SPLUNK_HOME/etc). Be sure to copy over apps and any custom configurations.

Refer to this: https://community.splunk.com/t5/Deployment-Architecture/What-is-the-process-to-move-an-infrastructure-from-virtual/m-p/110175
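The etc transfer step could look like the sketch below (assumptions: Linux hosts, tar/scp available; the hostname and paths are placeholders, and for the demo SPLUNK_HOME points at a scratch directory so the commands run anywhere):

```shell
# Demo scratch directory standing in for a real $SPLUNK_HOME
SPLUNK_HOME=${SPLUNK_HOME:-/tmp/splunk-demo}
mkdir -p "$SPLUNK_HOME/etc/apps/search/local"
printf '[monitor:///var/log/syslog]\n' > "$SPLUNK_HOME/etc/apps/search/local/inputs.conf"

# On the VM: stop Splunk cleanly first
# "$SPLUNK_HOME/bin/splunk" stop

# Archive the whole etc directory (apps, users, system configs)
tar -czf /tmp/splunk-etc-backup.tar.gz -C "$SPLUNK_HOME" etc

# Transfer to the physical server (placeholder hostname):
# scp /tmp/splunk-etc-backup.tar.gz admin@new-physical-host:/tmp/

# On the physical server: restore into the fresh installation, then start Splunk
# tar -xzf /tmp/splunk-etc-backup.tar.gz -C "$SPLUNK_HOME"

# Verify what went into the archive
tar -tzf /tmp/splunk-etc-backup.tar.gz
```

Using `-C "$SPLUNK_HOME"` keeps the archive paths relative to the install root, so it restores cleanly even if the physical server uses a different install location.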
Hi. I'm trying to monitor MSK metrics via the CloudWatch input. There is no AWS/Kafka entry in the Namespace list, so I just typed it in and set the dimension value to `[{}]`. But I can't get any metrics from the CloudWatch input. Please help me! I'm using Add-on version 7.0.0.
Yes, the SSH2 message is key. The actual solution depends on your exact use case/requirement. If you don't particularly care whether the user had multiple failures, transaction will do just fine. Assuming your sessionID is unique for each connection and that you don't care whether the attempted user name is the same, simply add startswith and endswith:

index=honeypot sourcetype=honeypotLogs
| rex "\s(?<action>Connected) to (?<IP>\S+)"
| rex "\sUser \"(?<user>\S+)\" (?<action>logged in)"
| rex "\sSSH2_MSG_(?<ssh2_msg_type>\w+)"
| rex ": (?<ssh2_message>.+)"
| rex field=ssh2_message "user: (?<user>\S+)"
| transaction sessionID startswith=ssh2_msg_type=USERAUTH_FAILURE endswith=ssh2_msg_type=USERAUTH_SUCCESS

The above maybe goes a little overboard in extraction, but usually these semantic elements can be of interest. If you care about the attempted user name, you can add user to transaction. If you care about multiple failed attempts, streamstats could be a better approach. The following is an extended emulation; it shows that transaction will only pick up sessions with at least one USERAUTH_FAILURE, and will only include the last event with USERAUTH_FAILURE.

| makeresults format=csv data="_raw
[02] Tue 27Aug24 15:20:56 - (143323) Connected to 1.2.3.4
[30] Tue 27Aug24 15:20:56 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[31] Tue 27Aug24 15:20:56 - (143323) SSH2_MSG_USERAUTH_FAILURE
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_FAILURE
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[02] Tue 27Aug24 15:20:57 - (143323) User \"bob\" logged in
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_SUCCESS: successful login
[02] Tue 27Aug24 15:20:58 - (143523) Connected to 1.2.3.4
[30] Tue 27Aug24 15:20:58 - (143523) SSH2_MSG_USERAUTH_REQUEST: user: alice
[02] Tue 27Aug24 15:20:58 - (143523) User \"alice\" logged in
[31] Tue 27Aug24 15:20:58 - (143523) SSH2_MSG_USERAUTH_SUCCESS: successful login"
| rex "^(\S+\s+){2}(?<_time>\S+\s+\S+) - \((?<sessionID>\d+)"
| eval _time = strptime(_time, "%d%b%y %T")
| reverse
``` the above emulates
index=honeypot sourcetype=honeypotLogs
```

Play with the emulation and compare with real data.
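If the number of failed attempts matters, a streamstats-based sketch (untested; field names follow the extractions above) could carry a running failure count per session and keep only successes that were preceded by at least one failure:

```
index=honeypot sourcetype=honeypotLogs
| rex "\sSSH2_MSG_(?<ssh2_msg_type>\w+)"
| sort 0 _time
| streamstats count(eval(ssh2_msg_type="USERAUTH_FAILURE")) AS failed_attempts BY sessionID
| where ssh2_msg_type="USERAUTH_SUCCESS" AND failed_attempts>0
```

Unlike transaction, this preserves the per-session failure count in failed_attempts, so you can threshold on it afterwards.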
Hi @sabari80 , what's your issue? Anyway, I created a macro (called e.g. "non_working_hours") and I call it; this way, if I need to modify one hour, I have to do it in only one place instead of every search. In addition, I created a lookup containing all the days of the next three years with an indication of holidays; this way, in my macro, I can also check for holidays, in addition to out-of-office hours and weekends. Ciao. Giuseppe
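As a sketch of that setup (the macro name, hours, and lookup file are illustrative, not an actual site config):

```
# macros.conf
[non_working_hours]
definition = (date_hour<8 OR date_hour>=18 OR date_wday="saturday" OR date_wday="sunday")
```

date_hour and date_wday are Splunk's default date/time fields. The holiday check would then come from the lookup, e.g. a holidays.csv with one row per date and an is_holiday flag, joined in via the lookup command inside the macro.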
Hi @nathanielchin , as @ITWhisperer said, Dashboard Studio doesn't have the post-process search feature, but a very similar feature called "chained searches" is available. In other words, you have to create your base search and then create the other searches starting from the base search, chaining each new search to it. For more info see https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/dsChain Ciao. Giuseppe
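In the dashboard definition JSON, a chained search is a ds.chain data source that extends a ds.search base; a minimal sketch (the queries and data-source names are illustrative):

```
{
  "dataSources": {
    "base": {
      "type": "ds.search",
      "options": { "query": "index=_internal | stats count by sourcetype" }
    },
    "top5": {
      "type": "ds.chain",
      "options": {
        "extend": "base",
        "query": "| sort - count | head 5"
      }
    }
  }
}
```

Each visualization then points at either "base" or "top5", and the base search runs only once.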
Hi @UnsuperviseLeon , as @PickleRick said, fields are listed among the interesting fields only if they appear in at least 20% of the events; you can check these fields by putting one of them in the main search (e.g. my_field=*). Also, it isn't certain that these fields are correctly parsed by the standard Windows parser; you have to check this and, if needed, add the missing parsings. Ciao. Giuseppe
Hi @st1 , don't use the transaction command because it's very slow; please try something like this, adapting my solution to your use case (e.g. the thresholds in the last row):

index=honeypot sourcetype=honeypotLogs ("SSH2_MSG_USERAUTH_FAILURE" OR "SSH2_MSG_USERAUTH_SUCCESS")
| eval kind=if(searchmatch("SSH2_MSG_USERAUTH_FAILURE"), "failure", "success")
| stats
    dc(kind) AS kind_count
    count(eval(kind="success")) AS success_count
    count(eval(kind="failure")) AS failure_count
    BY sessionID
| where kind_count=2 AND success_count>0 AND failure_count>10

Ciao. Giuseppe
Hi @cherrypick , good for you, see you next time! For the other people of the Community, please describe how you solved the issue. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi @irkey , let us know if we can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.