All Posts

How can I change the font size of text inside bar charts, column charts, and tables using the dashboard XML source? I tried font-size: 15 but it didn't work in the XML source.
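One pattern often suggested in the community, sketched here on the assumption that this is a Simple XML dashboard: font-size attributes typed directly into the XML are ignored, but CSS can be injected through a hidden <html> panel. The panel id (myTable), the hiding token ($alwaysHideCSS$), and the selectors below are illustrative and may need adjusting to the actual dashboard markup; chart text is rendered as SVG/canvas and often does not respond to this kind of CSS at all.

<dashboard>
  <row depends="$alwaysHideCSS$">
    <panel>
      <html>
        <style>
          /* hypothetical panel id; set id="myTable" on the target table panel */
          #myTable .table td, #myTable .table th { font-size: 15px !important; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel id="myTable">
      <table>
        <search>
          <query>index=_internal | stats count by sourcetype</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>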
Hi @ChaoticMike, in Splunk you have _time, which is the event timestamp, and _indextime, which is the time when the event was indexed. So you can calculate the difference between these two fields:
index=*
| eval diff=_indextime-_time
| stats avg(diff) AS diff_avg max(diff) AS diff_max min(diff) AS diff_min BY index
Ciao. Giuseppe
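A hedged follow-on sketch of the same idea, trending the ingestion lag over time instead of summarising it per index; index=*, the 15-minute span, and the split by sourcetype are placeholders to adapt:

index=*
| eval lag=_indextime-_time
| timechart span=15m avg(lag) AS avg_lag_seconds BY sourcetype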
Hello, For solid reasons that I can't go into here, we have a topology of... AWS CloudWatch -> Kinesis Firehose -> AWS Delivery Stream Object -> AWS Lambda -> HEC listener on a Heavy Forwarder -> that Heavy Forwarder -> another Heavy Forwarder -> Splunk Cloud. I'm pretty sure that (apart from having one HF forward to a second before hitting Splunk Cloud) that is the reference architecture for CloudWatch events. There is no Splunk indexing going on in our infrastructure; we are just forwarding loads of information to Splunk Cloud for indexing and analysis there. We can establish latency through most of that chain, but we are interested in determining the latency from when our events land in Splunk Cloud to those events being visible for analysis. Is there a handy metric or query we can re-use? Thanks in advance...
Try something like this
[search ] earliest=-4w
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| where current_day == log_day
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%d")
| stats count by hour day HTTP_STATUS_CODE
| chart avg(count) as average by hour HTTP_STATUS_CODE
Hi @man03359, what do you mean by "src_ips that have mismatched src name and device name"? Maybe src_ips that have different src_name or different device_name? If this is your requirement, please try this:
index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic")
| lookup Stores_Inventory src_ip OUTPUT Device
| stats latest(_time) AS latest values(srcname) AS srcname values(Device) AS Device latest(app) AS app dc(srcname) AS srcname_count dc(Device) AS Device_count BY src_ip
| where srcname_count>1 OR Device_count>1
| table src_ip Device srcname app
In this way you'll list all the src_ips with more than one name or device. Ciao. Giuseppe
Hi All, Below is my search query:
index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic")
| stats latest(_time) values(srcname) as src latest(app) as app by src_ip
| lookup Stores_Inventory src_ip OUTPUT Device
| table src_ip Device src app
I have 3 fields: src_ip, src and Device. I am getting the field values for src from the first 2 lines of the query:
index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic")
| stats latest(_time) values(srcname) as src latest(app) as app by src_ip
I am trying to build a search query that finds src_ips that have mismatched src name and device name. Thanks in advance.
Hi @danroberts, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
| reverse
| streamstats current=f window=1 latest(perc_change) as prev_value
| reverse
| fillnull value=0
| eval growing = if(perc_change < prev_value, 1, 0)
| table _time GB change perc_change prev_value growing
I don't understand what the inputs have to do with the issues in the Splunk web UI. Before the update to 9.1.1 there were no issues like these, so I think there's a bug in 9.1.1 causing them. If there were a way to roll back by changing the inputs.conf file, I would be fine testing this again. But repeating all the steps I did yesterday is out of the question. This is a waste of time!
Hey @carasso and @splunk team I want to build the Splunk query using the below requirements:
Data Source: sourcetype=pcf app_name=xyz HTTP_PATH="/*"
Time Frame: The query should cover a 4-week period (earliest=-4w).
Display: Calculate and display the average count per hour for the current day of the week for HTTP_STATUS_CODE.
Using the reference https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-Chart-Average-of-Last-4-Thursdays-vs-Today-in-a-Timechart/m-p/167913 we built the query, but while calculating the average we are getting zero results. The query is:
[search ] earliest=-4w
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| where current_day == log_day
| timechart span=1h avg(count) by HTTP_STATUS_CODE
I would expect to take the average by hour across all 4 days and build the timechart with a 1-hour span for 24 hours. Can you please help with the same...
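Combining the data source from this post with the stats-then-chart approach suggested in the reply further up, a hedged sketch; it assumes the data source was meant to read sourcetype=pcf, and that avg(count) returned zero results because count does not exist as a field until it has been computed:

sourcetype=pcf app_name=xyz HTTP_PATH="/*" earliest=-4w
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| where current_day == log_day
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%d")
| stats count by hour day HTTP_STATUS_CODE
| chart avg(count) as average by hour HTTP_STATUS_CODE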
  2023-08-04 08:53:00.473, ID="15438391", EventClass="10", textdata="exec up_tcsbs_ess_ins_ipsysuser @IID=20231619,@RoleID=NULL,@AdpGuid='F31B78A6-285F-4E8A-A063-8581CEA30AD4',@PersonId='641',@dob='1991-03-16 00:00:00',@ssn='114784117',@tin=default,@companyname=default,@contactzip='181037802',@hiredate='2023-07-14 00:00:00',@adpUserId=NULL,@associateId=default,@essRoleId='15'", HostName="DC1PRRUNVBT0034", ClientProcessID="20496", ApplicationName=".Net SqlClient Data Provider", LoginName="TcStandard", SPID="5893", Duration="3247079", StartTime="2023-08-04 09:53:00.473", EndTime="2023-08-04 09:53:03.72", Reads="95", Writes="5", CPU="0", Error="0", DatabaseName="iFarm", RowCounts="6", RequestID="0", EventSequence="1447598967", SessionLoginName="TcStandard", ServerName="DC1PRMSPADB40"  
Can Kaspersky Security Center with a free license export syslog to Splunk? And if it can, how do you configure a new file monitor input on the forwarder to pick up the syslog exported from Kaspersky Security Center?
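Assuming the KSC syslog export ends up (directly, or via a syslog daemon) in a file on a host running a Universal Forwarder, a minimal monitor stanza sketch for inputs.conf; the path, sourcetype, and index below are placeholders, not KSC-specific values:

# monitor the file the KSC/syslog export is written to (hypothetical path)
[monitor:///var/log/kaspersky/ksc-export.log]
disabled = false
sourcetype = kaspersky:ksc:syslog
index = kaspersky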
Hi @Yashvik, very strange! As you can see, it works on my Splunk. Did you copy my search exactly? Ciao. Giuseppe
@irom77 have you configured the outputs in the app's json file? https://docs.splunk.com/Documentation/SOARonprem/6.1.1/DevelopApps/Metadata#Action_Section:_Output 
Probably the easiest way is to go back to the state you were in right after the fresh installation, when everything was working. Then just add the inputs one by one and see which one breaks your environment. This is an annoying and time-consuming process, but I still think it is the easiest way.
Hi Splunkers, I have to perform a UF config and I don't know if some problem could arise. Let me explain better. For a customer, we are collecting data from Windows systems using the UF. All selected logs come in fine. Now, we have to collect Windows DNS query logs; they are collected in debug mode and then stored in a path. So, before any UF or Splunk action, the flow is: Win DNS set to debug mode -> log forwarded to a server -> logs stored in the server's path. Due to the high volume of collected data, on that server there are 2 scripts that enforce a retention policy and, in a nutshell, delete logs older than 1 day. This is because when DNS forwards logs, it writes a file of maximum 500 MB and then another one is created. So, files are written until the threshold is reached. Since we want to use the UF to monitor that path, our customer asked us about its behavior regarding file monitoring; his doubt is how the UF works when monitoring files, especially the one currently being written. My understanding is that the UF should work exactly like any other File & Directory data input: if we tell it, in an inputs.conf stanza, to "monitor path X", it should simply monitor each file in a sequential manner; am I right?
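For reference, a hedged sketch of the kind of inputs.conf stanza being discussed, assuming the DNS debug files all land under one directory on the host running the UF (the path, sourcetype, and index are placeholders); the forwarder tails the file currently being written and picks up newly created files as they appear, and ignoreOlderThan roughly mirrors the 1-day retention described above:

# hypothetical directory for the Windows DNS debug logs
[monitor://D:\DNSLogs\*.log]
disabled = false
sourcetype = windows:dns:debug
index = dns
ignoreOlderThan = 2d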
I performed all the checks suggested and nothing seemed to be wrong; after more than 1 day, logs started to come into the cloud. My assumption is that some latency problem delayed log receiving and, after the initial burst, they started to come in.
Hello @gcusello Thanks for the response. Unfortunately, I see only empty values for the sourcetype column; the other 3 fields show the info.
You need to read up on Linux user management, or ask your SysAdmin how to determine such matters. Understandably, Windows user management is totally different from Unix and Linux user management. Unless your system uses some uncommon admin overlay (which only your SysAdmin can tell you), the userdel command can only be executed by root (uid 0). A non-root user may have sudo privileges to execute commands as root, but then the command can only be executed as sudo userdel. Alternatively, if an unprivileged user is allowed a root shell, such a user can first use sudo su to gain a root shell, then execute userdel in that shell as if they were root.

Most modern Linux systems log full command history. You didn't say which Linux OS you are using. You say "(syslog) only shows the name of the user account that was deleted," but without any context, like which source file you are looking at. In Unix-like systems, "syslog" is an OS facility that can be organized in many different ways, i.e., various messages (events) can go to various places. (If you are unsure, ask your SysAdmin.) You didn't even illustrate a sample log entry. (You can always anonymize, but make sure to preserve formatting and other characteristics.) Volunteers cannot possibly help with all these ambiguities.
What @PickleRick is trying to say is that you should tell volunteers what "other eventTypes" means and what their data look like. I'd like to add, regarding your example - "If my src_ip=73.09.52.00, then the src_ip should search the other available eventType and filter the result if the user_id=*idp*": What does "filter the result" mean? In many contexts, this phrase is commonly used to mean "to exclude results satisfying such and such." But in your case, I have a suspicion that you mean the exact opposite.

In addition to this question, you also fail to tell volunteers which data you expect to include AFTER "filtering the result". Are you interested only in fields from "other eventTypes"? Only in fields from eventTypes security.threat.detected and security.internal.threat.detected? Or some fields from eventTypes security.threat.detected and security.internal.threat.detected, and some fields from "other eventTypes"? Which ones?

When you ask a question in a user forum, you need to give all the precise relevant information in terms of data, desired results, and the logic between the data and the desired results, and not make volunteers take wild guesses.