All Posts


Hi, I have a requirement where I have a table on my dashboard created using Dashboard Studio. I need to redirect to another dashboard when a user clicks a cell in Column A. Also, when a user clicks a cell in Column C, the user should be redirected to a URL. How can we achieve this linking of a dashboard and a URL on the same table, based on the column clicked?
Hello @ITGSOC , yes, you can migrate a Splunk Enterprise server from a virtual machine (VM) to a physical server. Before starting the migration, take a complete backup of your Splunk data, configurations, and any custom settings. Ensure that the physical server meets the hardware requirements for running Splunk and that the operating system is compatible with the version of Splunk you're using. Transfer your configuration files and data from the virtual machine to the physical server. This typically includes files in the etc directory within your Splunk installation ($SPLUNK_HOME/etc). Be sure to copy over apps and any custom configurations. Refer to this: https://community.splunk.com/t5/Deployment-Architecture/What-is-the-process-to-move-an-infrastructure-from-virtual/m-p/110175
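A minimal sketch of the copy step, assuming a Linux host with Splunk installed in /opt/splunk ("newhost" is a placeholder; adjust paths and hostnames for your environment):

# Stop Splunk on the VM so the configuration files are consistent before copying
/opt/splunk/bin/splunk stop
# Archive the configuration (etc contains apps, users, and system settings)
tar -czf splunk_etc_backup.tar.gz -C /opt/splunk etc
# Copy the archive to the new physical server, then unpack it into the new $SPLUNK_HOME
scp splunk_etc_backup.tar.gz newhost:/tmp/

Index data (the db directories under $SPLUNK_DB) would need to be copied the same way if you want to keep historical events.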
Can I migrate a Splunk Enterprise server from a virtual machine to a physical server?
Hi. I'm trying to monitor MSK metrics with the CloudWatch input. There is no AWS/Kafka entry in the Namespace list, so I just typed it in and set the dimension value to `[{}]`, but I can't get any metrics from the CloudWatch input. Please help me! I'm using Add-on version 7.0.0.
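One thing worth checking is the dimension value: an empty `[{}]` may not match the dimension combinations that MSK actually reports. A hedged sketch of the dimensions value, assuming the MSK dimension name "Cluster Name" (verify the exact dimension names your cluster publishes in the CloudWatch console):

metric_dimensions = [{"Cluster Name": [".*"]}]

The namespace typed in manually also has to match exactly what CloudWatch uses, i.e. AWS/Kafka.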
Yes, the SSH2 message is key. The actual solution depends on your exact use case/requirement. If you don't particularly care whether the user had multiple failures, transaction will do just fine. Assuming your sessionID is unique for each connection and that you don't care whether the attempted user name is the same, simply add startswith and endswith.

index=honeypot sourcetype=honeypotLogs
| rex "\s(?<action>Connected) to (?<IP>\S+)"
| rex "\sUser \"(?<user>\S+)\" (?<action>logged in)"
| rex "\sSSH2_MSG_(?<ssh2_msg_type>\w+)"
| rex ": (?<ssh2_message>.+)"
| rex field=ssh2_message "user: (?<user>\S+)"
| transaction sessionID startswith="ssh2_msg_type=USERAUTH_FAILURE" endswith="ssh2_msg_type=USERAUTH_SUCCESS"

The above maybe goes a little overboard with extraction, but these semantic elements are usually of interest. If you care about the attempted user name, you can add user to transaction. If you care about multiple failed attempts, streamstats could be a better approach. The following is an extended emulation; it shows that transaction will only pick up sessions with at least one USERAUTH_FAILURE, and that transaction will only include the last event with USERAUTH_FAILURE.

| makeresults format=csv data="_raw
[02] Tue 27Aug24 15:20:56 - (143323) Connected to 1.2.3.4
[30] Tue 27Aug24 15:20:56 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[31] Tue 27Aug24 15:20:56 - (143323) SSH2_MSG_USERAUTH_FAILURE
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_FAILURE
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[02] Tue 27Aug24 15:20:57 - (143323) User \"bob\" logged in
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_SUCCESS: successful login
[02] Tue 27Aug24 15:20:58 - (143523) Connected to 1.2.3.4
[30] Tue 27Aug24 15:20:58 - (143523) SSH2_MSG_USERAUTH_REQUEST: user: alice
[02] Tue 27Aug24 15:20:58 - (143523) User \"alice\" logged in
[31] Tue 27Aug24 15:20:58 - (143523) SSH2_MSG_USERAUTH_SUCCESS: successful login"
| rex "^(\S+\s+){2}(?<_time>\S+\s+\S+) - \((?<sessionID>\d+)"
| eval _time = strptime(_time, "%d%b%y %T")
| reverse
``` the above emulates index=honeypot sourcetype=honeypotLogs ```

Play with the emulation and compare with real data.
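A minimal sketch of the streamstats alternative mentioned above, assuming sessionID is the number in parentheses (as in the emulation) and that a success should only be flagged when at least one failure preceded it in the same session:

index=honeypot sourcetype=honeypotLogs
| rex "\((?<sessionID>\d+)\)"
| rex "\sSSH2_MSG_(?<ssh2_msg_type>\w+)"
| sort 0 _time
| streamstats count(eval(ssh2_msg_type="USERAUTH_FAILURE")) as failures_so_far by sessionID
| where ssh2_msg_type="USERAUTH_SUCCESS" AND failures_so_far>0

The sort puts events into chronological order before streamstats, so failures_so_far tells you how many failed attempts came before each successful login in that session.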
I see, thank you for letting me know!
Hi @sabari80 , what's your issue? Anyway, I created a macro (called e.g. "non_working_hours") and I call it; this way, if I need to modify an hour I only have to change it in one place. In addition, I created a lookup containing all the days of the next three years with an indication of the holidays, so in my macro I can also check for holidays, in addition to out-of-office hours and weekends. Ciao. Giuseppe
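A minimal sketch of what such a macro could look like in macros.conf (the hours and day names are assumptions; adjust them to your own definition of non-working hours):

[non_working_hours]
definition = (date_hour<8 OR date_hour>=18 OR date_wday="saturday" OR date_wday="sunday")

It can then be called in a search by wrapping the macro name in backticks, e.g. index=myindex `non_working_hours`.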
Hi @nathanielchin , as @ITWhisperer said, in Dashboard Studio there isn't the post-process search feature, but a very similar feature called "chained searches" is available. In other words, you have to create your base search and then create the other searches starting from the base search, chaining each new search to it. For more info see https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/dsChain  Ciao. Giuseppe
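A minimal sketch of how a chained search can look in the dashboard's source JSON, assuming a base data source with the id "ds_base" (the ids and queries are placeholders; check the dsChain documentation linked above for the exact syntax in your version):

"dataSources": {
    "ds_base": {
        "type": "ds.search",
        "options": {
            "query": "index=abcd sourcetype=xyz | fields host status"
        }
    },
    "ds_chained": {
        "type": "ds.chain",
        "options": {
            "extend": "ds_base",
            "query": "| stats count by status"
        }
    }
}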
Hi @UnsuperviseLeon , as @PickleRick said, fields are listed under interesting fields only if they are present in at least 20% of the events; you can check these fields by putting one of them in the main search (e.g. my_field=*). Then, it isn't certain that these fields are correctly parsed by the standard Windows parser; you have to check this and, if necessary, add the missing parsing. Ciao. Giuseppe
Hi @st1 , don't use the transaction command because it's very slow; please try something like this, adapting my solution to your use case (e.g. the thresholds in the last row):

index=honeypot sourcetype=honeypotLogs ("SSH2_MSG_USERAUTH_FAILURE" OR "SSH2_MSG_USERAUTH_SUCCESS")
| eval kind=if(searchmatch("SSH2_MSG_USERAUTH_FAILURE"), "failure", "success")
| stats dc(kind) AS kind_count count(eval(kind="success")) AS success_count count(eval(kind="failure")) AS failure_count BY sessionID
| where kind_count=2 AND success_count>0 AND failure_count>10

Ciao. Giuseppe
Hi @cherrypick , good for you, see you next time! For the other people of the Community, please describe how you solved the issue. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @sgro777 , sorry, my error, please try: eventtype=builder (user_id IN ($id$) OR user_mail IN ($email$)) | eval ... Ciao. Giuseppe
Hi @irkey , let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Never mind, it doesn't work.
Hi, sorry for the confusion, I just pasted a single input stanza; however, I have 8 different monitor stanzas in my inputs.conf and they are all working and ingesting the data.

crcSalt = <DATETIME>
What it does: this setting includes the file's last modification time in the checksum calculation.
Use case: it's useful when you want Splunk to reindex the file if the file's last modified timestamp changes, even if the content stays the same.

So for my use case I need to ingest the complete csv file data daily, which is why I used crcSalt = <DATETIME>. (Am I doing this right or wrong? Please correct me.) "Small set of data" means I'm only getting a few rows of data from the csv file and not the complete csv data. Can you please help? Thank you.
How to check index and volume parameters and index size?
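One way to look at index sizes and limits is the indexes REST endpoint; a minimal sketch (the listed fields are the standard ones exposed by /services/data/indexes, adjust the table to what you need, and you need read access to the endpoint):

| rest /services/data/indexes
| table title currentDBSizeMB maxTotalDataSizeMB frozenTimePeriodInSecs homePath coldPath

For bucket-level detail on a single index, | dbinspect index=<your_index> can be used instead.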
1. You obviously can't read data from 8 files if you have an input set up for just one of them.
2. Leave the crcSalt setting alone. It is very, very rarely needed. Usually you should rather set initCrcLength if the files have a common header/preamble (see the sketch after this list).
3. What do you mean by "small set of data is being ingested"?
4. Did you check splunk list monitor and splunk list inputstatus?
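A minimal sketch of a single wildcard monitor stanza covering all 8 csv files, with initCrcLength raised in case the files share a long common header (the path, sourcetype, and index are placeholders, and 1024 is just an illustrative value):

[monitor://C:\data\*.csv]
disabled = false
sourcetype = xyz
index = abcd
initCrcLength = 1024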
Are you looking for something like this?

index=itsi_summary
| eval kpiid = mvappend(kpiid, itsi_kpi_id)
| stats latest(alert_value) as alert_value latest(alert_severity) as health_score by kpiid kpi
| join type=left kpiid
    [| inputlookup service_kpi_lookup
    | stats latest(title) as title by kpis._key
    | rename kpis._key as kpiid ]
| search title IN ("<Service Names>") kpi!="ServiceHealthScore"
Hi, I'm currently working on ingesting 8 csv files from a path using inputs.conf on a UF, and the data is getting ingested. The issue is that these 8 csv files are overwritten daily with new data by an automation script, so the data inside the csv files changes daily. I want to ingest the complete csv data into Splunk daily, but what I can see is that only a small set of data is getting ingested, not the complete csv file data. My inputs.conf is:

[monitor://C:\file.csv]
disabled = false
sourcetype = xyz
index = abcd
crcSalt = <DATETIME>

Can someone please help me check whether I'm using the correct input or not? The ultimate requirement is to ingest the complete csv data from the 8 csv files into Splunk daily. Thank you.
How do I check splunk list monitor, and where do I run it, etc.?
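These are CLI commands run on the forwarder itself, from the Splunk bin directory. A minimal sketch assuming a Windows universal forwarder installed in the default location (adjust the path to your installation):

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk list monitor
splunk list inputstatus

The first command lists which files the monitor inputs have picked up; the second shows the read status of each monitored file.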