All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @danroberts, good for you, see you next time! Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated by all the contributors
| reverse | streamstats current=f window=1 latest(perc_change) as prev_value | reverse | fillnull value=0 | eval growing = if(perc_change< prev_value,1,0) | table _time GB change perc_change prev_value growing
I don't understand what the inputs have to do with the issues in the Splunk web UI. Before the update to 9.1.1 there were no issues like these, so I think there's a bug in 9.1.1 causing them. If there were a way to roll back by changing the inputs.conf file, I would be fine testing this again. But repeating all the steps I did yesterday is out of the question. It's a waste of time!
Hey @carasso and @splunk team, I want to build a Splunk query using the requirements below. Data source: sourcetype=pcf app_name=xyz HTTP_PATH="/*". Time frame: the query should cover a 4-week period (earliest=-4w). Display: calculate and display the average count per hour, for the current day of the week, by HTTP_STATUS_CODE.  Using the reference #https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-Chart-Average-of-Last-4-Thursdays-vs-Today-in-a-Timechart/m-p/167913?_ga=2.262359695.2003626727.1695023755-301331303.1687328075&_gl=1*y4c9e*_ga*MzAxMzMxMzAzLjE2ODczMjgwNzU.*_ga_GS7YF8S63Y*MTY5NTAyMzkyOC4xLjEuMTY5NTAyNjA5Ny4wLjAuMA..*_ga_5EPM2P39FV*MTY5NTAyMzc1Ni4yLjEuMTY5NTAyNjA5OS4wLjAuMA.. we built the query, but when we calculate the average we get zero results. The query is: [search ] earliest=-4w | eval current_day = strftime(now(), "%A") | eval log_day = strftime(_time, "%A") | where current_day == log_day | timechart span=1h avg(count) by HTTP_STATUS_CODE. I would expect to take the average by hour across all 4 matching days and build a timechart with a 1-hour span over 24 hours. Can you please help with the same?
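A likely cause of the zero results: timechart avg(count) averages a field named count, which normally does not exist in raw events, so the average is null. A hedged sketch (base search filters taken from the question; the sourcetype and field names are assumptions) that first counts events per hour and then averages those hourly counts across the 4 matching weekdays:

```spl
sourcetype=pcf app_name=xyz HTTP_PATH="/*" earliest=-4w
| where strftime(_time, "%A") == strftime(now(), "%A")
| bin _time span=1h
| stats count AS hourly_count BY _time HTTP_STATUS_CODE
| eval hour = strftime(_time, "%H")
| chart avg(hourly_count) AS avg_per_hour OVER hour BY HTTP_STATUS_CODE
```

This should yield one row per hour of day (00-23), with each cell being the average of the four observed hourly counts for that status code.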
  2023-08-04 08:53:00.473, ID="15438391", EventClass="10", textdata="exec up_tcsbs_ess_ins_ipsysuser @IID=20231619,@RoleID=NULL,@AdpGuid='F31B78A6-285F-4E8A-A063-8581CEA30AD4',@PersonId='641',@dob='1991-03-16 00:00:00',@ssn='114784117',@tin=default,@companyname=default,@contactzip='181037802',@hiredate='2023-07-14 00:00:00',@adpUserId=NULL,@associateId=default,@essRoleId='15'", HostName="DC1PRRUNVBT0034", ClientProcessID="20496", ApplicationName=".Net SqlClient Data Provider", LoginName="TcStandard", SPID="5893", Duration="3247079", StartTime="2023-08-04 09:53:00.473", EndTime="2023-08-04 09:53:03.72", Reads="95", Writes="5", CPU="0", Error="0", DatabaseName="iFarm", RowCounts="6", RequestID="0", EventSequence="1447598967", SessionLoginName="TcStandard", ServerName="DC1PRMSPADB40"  
Can Kaspersky Security Center with a free license export syslog to Splunk? And if it can, how do I configure a new file monitor input on the forwarder to ingest the syslog exported from Kaspersky Security Center?
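Assuming KSC is set up to export events to a local file the forwarder can read (the path, index, and sourcetype below are placeholders, not KSC defaults), a minimal monitor stanza in the Universal Forwarder's inputs.conf might look like:

```ini
[monitor:///var/log/kaspersky/ksc_export.log]
index = security
sourcetype = kaspersky:ksc
disabled = 0
```

If KSC can only send events over the network rather than write them to a file, a [tcp://514] or [udp://514] syslog input on a forwarder (or a dedicated syslog server writing to a monitored file) would be the alternative.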
Hi @Yashvik, very strange! As you can see, it works on my Splunk. Did you copy my search exactly? Ciao. Giuseppe
@irom77 have you configured the outputs in the app's json file? https://docs.splunk.com/Documentation/SOARonprem/6.1.1/DevelopApps/Metadata#Action_Section:_Output 
Probably the easiest way is to go back to the state right after the fresh installation, when everything was working. Then add inputs one by one and see which one broke your environment. This is an annoying and time-consuming process, but I still think it is the easiest way.
Hi Splunkers, I have to perform a UF configuration and I don't know if some problems could arise. Let me explain. For a customer, we are collecting data from Windows systems using the UF. All selected logs come in fine. Now we have to collect Windows DNS query logs; they are collected in debug mode and then stored in a path. So, before any UF or Splunk action, the flow is: Win DNS set to debug mode -> logs forwarded to a server -> logs stored in the server's path. Due to the high volume collected, on that server there are 2 scripts that enforce a retention policy and, in a nutshell, delete logs older than 1 day. This is because when DNS forwards logs, it writes a file of at most 500 MB and then creates another one. So, files are written until the threshold is reached. Since we want to use the UF to monitor that path, our customer asked us about its behavior regarding file monitoring; his doubt is how the UF behaves when monitoring files, especially the one currently being written. My understanding is that the UF should work exactly like any other File & Directory data input: if we say, in an inputs.conf stanza, "monitor path X", it should simply monitor each file sequentially; am I right?
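For reference, a minimal sketch of such a monitor stanza (the path, index, and sourcetype are placeholders). The UF keeps a seek pointer per file, so it keeps reading the currently-written file as data is appended and naturally moves on when the DNS service rolls over to a new 500 MB file:

```ini
[monitor://D:\DNSLogs\*.log]
index = windows_dns
sourcetype = windows:dns:debug
disabled = 0
# Optional: skip files not modified recently (aligns with the 1-day retention)
ignoreOlderThan = 2d
```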
I performed all the suggested checks and nothing seemed to be wrong; after more than 1 day, logs started to come into the cloud. My assumption is that some latency problem delayed log receiving and, after the initial burst, they started to arrive.
Hello @gcusello  Thanks for the response. Unfortunately, I see only empty values in the sourcetype column; the other 3 fields show the info. 
You need to read up on Linux user management, or ask your SysAdmin how to determine such matters. Understandably, Windows user management is totally different from Unix and Linux user management.  Unless your system uses some uncommon admin overlay (which only your SysAdmin can tell you about), the userdel command can only be executed by root (uid 0).  A non-root user may have sudo privileges to execute commands as root, but then the command can only be run as sudo userdel.  Alternatively, if an unprivileged user is allowed a root shell, such a user can first use sudo su <shell name> to gain a root shell, then execute userdel in that shell as if they were root. Most modern Linux systems log full command history.  You didn't say which Linux OS you are using.  You say "(syslog) only shows the name of the user account that was deleted," but without any context, such as which source file you are looking at.  In Unix-like systems, "syslog" is an OS facility that can be organized in many different ways, i.e., various messages (events) can go to various places. (If you are unsure, ask your SysAdmin.)  You didn't even illustrate a sample log entry. (You can always anonymize, but make sure to preserve formatting and other characteristics.)  Volunteers cannot possibly help with all these ambiguities.
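If those Linux auth logs are already indexed in Splunk, a hedged SPL sketch to recover who invoked userdel via sudo. The index, sourcetype, and rex patterns are assumptions based on a typical sudo log line such as "sudo: alice : TTY=pts/0 ; ... COMMAND=/usr/sbin/userdel bob"; adjust them to your actual events:

```spl
index=os sourcetype=linux_secure "COMMAND=" "userdel"
| rex "sudo:\s+(?<invoking_user>\S+)\s+:"
| rex "COMMAND=\S*userdel\s+(?<deleted_user>\S+)"
| table _time host invoking_user deleted_user
```

Note the second rex would capture a flag (e.g. -r) rather than the account name if options were passed; it is only an illustration of the approach.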
What @PickleRick is trying to say is that you should tell volunteers what "other eventTypes" means and what their data look like.  I'd like to add Example - If my src_ip=73.09.52.00, then the src_ip should search the other available eventType and filter the result if the user_id=*idp* What does "filter the result" mean?  In many contexts, this phrase is commonly used to mean "to exclude results satisfying such and such."  But in your case, I suspect that you mean the exact opposite. In addition to this question, you also haven't told volunteers which data you expect to include AFTER "filtering the result".  Are you interested only in fields from "other eventTypes"?  Only in fields from eventTypes security.threat.detected and security.internal.threat.detected?  Or some fields from each?  Which ones? When you ask a question in a user forum, you need to give all the relevant information precisely, in terms of data, desired results, and the logic connecting the two, and not make volunteers take wild guesses.
Hi @nill, good for you, see you next time! Let me know if I can help you more, or please accept an answer for the other people in the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
You mention the tolerance being influenced by the spike itself. Are you fitting your algorithm on data which includes the intended outlier? Using only data you consider normal to fit the function would likely solve your issue here. The same goes for continuous re-training via partial_fit; use this only after all new data has been predicted using the old model state. If this is not the issue, some more information regarding which MLTK algorithm you are planning to use, your current parameter setup, and what data you are using for your train/test split might give a better idea as to the root cause of your issue.
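As a concrete sketch of the fit-on-baseline-only approach (the lookup, field, threshold, and model names here are hypothetical; the right algorithm depends on your use case), you might fit once on data you consider normal:

```spl
| inputlookup baseline_metrics.csv
| fit DensityFunction requests_per_min threshold=0.005 into baseline_model
```

and then score new data against the frozen model at detection time, rather than refitting on data that may contain the spike:

```spl
index=metrics earliest=-15m
| timechart span=1m count AS requests_per_min
| apply baseline_model
```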
Hi @gcusello  Thank you so much for the assistance. Greatly appreciated. Regards, Nill
First of all, I suspect that by "continuous increase" you actually mean a monotonic increase.  Are you thinking of delta instead?  What is the output format you need in the report?  If you want all the event details, you can use eventstats to determine whether there was any decrement. | delta perc_change as delta | eventstats min(delta) as min_delta | where min_delta >= 0 | table _time GB delta perc_change If you do not need every event, you may construct a stats command that is more efficient.
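As one hedged example of such a stats-based check (reusing the perc_change field name from this thread), reducing everything to a single yes/no verdict instead of keeping every event:

```spl
| delta perc_change AS delta
| stats min(delta) AS min_delta
| eval monotonic_increase = if(min_delta >= 0, "yes", "no")
```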
question, the alert should be triggered with both set-1 and set-2 because set-1 has one unchanged event_id whereas set-2 has three unchanged event_ids. In that case,

| stats list(_time) as _time by event_id event_name task_id
| where mvcount(_time) > 1
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

should suffice.  The emulated dataset 1 gives

event_id  event_name        task_id  _time
1274856   pending-transfer  1        2022-09-04 21:40:39.000, 2022-09-04 22:10:39.000

Emulated dataset 2 gives

event_id  event_name        task_id  _time
1274748   pending-transfer  2        2022-09-04 22:05:39.000, 2022-09-04 21:35:39.000
1274856   pending-transfer  1        2022-09-04 22:10:39.000, 2022-09-04 21:40:39.000
1274902   pending-transfer  3        2022-09-04 22:00:39.000, 2022-09-04 21:30:39.000

Can you show a dataset where the above does not meet the requirement? (Just modify the emulations so we are on the same page.)
Hi @grotti, if you don't have too many comments for each row, you could use:

index=notable status_label=Closed
| stats values(comment) AS comment count BY rule_title
| sort 10 -count

Ciao. Giuseppe