All Topics

Hello, I have set up an email alert. ID is the unique identifier. My source file is a text file that updates from time to time; whenever new activity is captured, the forwarder re-reads the file, so to avoid duplication in my search I am using dedup on ID. If I don't use dedup ID in my search, it shows a number of results that does not match the file. For example: my file has 3 logs; after some activity, 2 more logs are added, so the total count is 5, yet Splunk shows 8 events in the GUI. To avoid this I am using dedup ID. Now, the issue is that my alert is real-time and I am getting a lot of duplicated results in my email. Below is my query:

index=pro sourcetype=logs Remark="xyz" | dedup ID | table ID, _time, field1, field2, field3, field4

Using the above query I get the correct result in the GUI, but a number of alerts are generated by email.
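One common workaround (a sketch only, assuming a scheduled alert is acceptable instead of real-time, and that the 15-minute window below roughly matches how often the file is re-read) is to run the same search on a schedule and let alert throttling suppress repeats per ID:

index=pro sourcetype=logs Remark="xyz" earliest=-15m@m latest=@m
| dedup ID
| table ID, _time, field1, field2, field3, field4

In the alert's trigger settings, enabling Throttle and suppressing results containing the ID field for at least the file's re-read interval should keep each ID from emailing more than once.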
Hello everyone, In my Splunk journey, I have to write documentation for the installation of the Universal Forwarder. Our forwarders will be installed on VMs that are on a private network, so we need some network configuration to let the Universal Forwarder send data to the Splunk indexers. Our indexers are installed on another private network; we created a rule on that network to receive data on port 9997 of the Splunk server. I'm looking for the network prerequisites before installing the forwarder. What rules do we have to create on the forwarder's network? What port do we have to open on the forwarder's network? Do we need to create a specific flow for the forwarder to send data to the indexers? What protocol do we have to set up on the forwarder's network? Thanks to all who read me.
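For reference, forwarder-to-indexer traffic is plain TCP initiated by the forwarder, so the usual prerequisite is simply an outbound rule from the forwarder network to the indexer network on TCP 9997 (plus TCP 8089 to a deployment server, if one is used). A minimal outputs.conf sketch on the forwarder side, assuming the default unencrypted transport and placeholder indexer addresses:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.1:9997, 10.0.0.2:9997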
Hi, We are using a Splunk hybrid environment, with a Splunk HF on Splunk Enterprise and the indexers and search heads on Splunk Cloud. I have installed and configured the Qualys TA add-on on the Splunk HF and am ingesting the data into Splunk Cloud. But the Qualys apps are supported only on Splunk Enterprise and not Splunk Cloud. Is there a way to get the dashboards on Splunk Cloud? Can someone please help?
Hello to everyone! I have many FlexEngine.log files in different directories that are ingested by Splunk UF 9.0.8. The log path is a network share on a Windows Server, to which a client-side application writes via SMB. Some files are ingested without errors, but others produce the errors you can see below:

03-18-2024 11:39:23.852 +0300 ERROR TailReader [10000 tailreader0] - error from read call from 'L:\App\UEM\CB\UserSettings\username\FlexEngine.log'.
03-18-2024 11:39:27.839 +0300 WARN FileClassifierManager [10000 tailreader0] - Unable to open 'L:\App\UEM\CB\UserSettings\username\FlexEngine.log'.
03-18-2024 11:39:27.839 +0300 WARN FileClassifierManager [10000 tailreader0] - The file 'L:\App\UEM\CB\UserSettings\username\FlexEngine.log' is invalid. Reason: cannot_open.

inputs.conf looks like:

[monitor://L:\App\UEM\CB\UserSettings\*\FlexEngine.log]
disabled = false
index = dem
sourcetype = dem_file_log

and this is an example of a file:

2024-03-18 07:01:32.889 [INFO ] Starting FlexEngine v9.9.0.905 [IFP#14d600e0-T5>>]
2024-03-18 07:01:32.889 [INFO ] Running as Group Policy client-side extension
2024-03-18 07:01:32.889 [INFO ] Performing path-based import
2024-03-18 07:01:32.890 [DEBUG] User: domain\username, Computer: ComputerName, OS: x64-win10 (Version 1809, BuildNumber 17763.5329, SuiteMask 100, ProductType 1/7d, Lang 0419, IE 11.1790.17763.0, VMware VDM 7.12.0, App Volumes 2.18.6.24, DEM 9.9.0.905, ProcInfo 1/1/2/2, UTC+03:00N), PTS: 6108/2768/1CT
2024-03-18 07:01:32.890 [DEBUG] Profile state: local (0x00000204)
2024-03-18 07:01:32.890 [DEBUG] Recursively processing config files from path '\\domain\app\UEM\CB\Settings\general'
2024-03-18 07:01:32.890 [DEBUG] Using profile archive path '\\domain\app\UEM\CB\UserSettings\username'
2024-03-18 07:01:32.890 [DEBUG] Last modified dates will be restored
2024-03-18 07:01:32.890 [DEBUG] Logging to file '\\domain\app\UEM\CB\UserSettings\username\FlexEngine.log'
2024-03-18 07:01:32.890 [DEBUG] Log file will be overwritten when larger than 512 kilobytes

Which problems can lead to these errors? Could it be file locking by the client-side app, or should the Splunk UF handle this situation?
Dears, I'm trying to filter out XML-formatted events; below are a sample event and the regexes we used.

Sample event:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/><EventID>4624</EventID><Version>1</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2024-03-18T07:29:59.988001100Z'/><EventRecordID>11295805761</EventRecordID><Correlation/><Execution ProcessID='796' ThreadID='25576'/><Channel>Security</Channel><Computer>DC01.XXXX.COM</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NULL SID</Data><Data Name='SubjectUserName'>-</Data><Data Name='SubjectDomainName'>-</Data><Data Name='SubjectLogonId'>0x0</Data><Data Name='TargetUserSid'>UCXXX\XXXDSOD02$</Data><Data Name='TargetUserName'>XXXDSOD02$</Data><Data Name='TargetDomainName'>UCXXX</Data><Data Name='TargetLogonId'>0x13443956d5</Data><Data Name='LogonType'>3</Data><Data Name='LogonProcessName'>Kerberos</Data><Data Name='AuthenticationPackageName'>Kerberos</Data><Data Name='WorkstationName'>-</Data><Data Name='LogonGuid'>{5517AA4A-D860-6053-03FD-1FE752FC995B}</Data><Data Name='TransmittedServices'>-</Data><Data Name='LmPackageName'>-</Data><Data Name='KeyLength'>0</Data><Data Name='ProcessId'>0x0</Data><Data Name='ProcessName'>-</Data><Data Name='IpAddress'>172.X.X.73</Data><Data Name='IpPort'>53681</Data><Data Name='ImpersonationLevel'>%%1833</Data></EventData></Event>

Regex implemented in the inputs.conf file:

blacklist10 = EventCode="4624" Message="SubjectUserSid:\s+(NULL SID)"
blacklist11 = $xmlRegex="\<EventID\>4624.*\'SubjectUserSid\'\>NULL\sSID\<.+SubjectUserName\'\>\-\<.+SubjectDomainName\'\>\-\<.+SubjectLogonId\'\>0x0\<"
blacklist12 = EventCode="4624" WorkstationName="-"

props.conf:

TRANSFORMS-null=setnull

transforms.conf:

[setnull]
SOURCE_KEY = _raw
REGEX = (\<EventID\>4624.+\'SubjectUserSid\'\>NULL\sSID\<.+SubjectUserName\'\>\-\<.+SubjectDomainName\'\>\-\<.+SubjectLogonId\'\>0x0\<)
DEST_KEY = queue
FORMAT = nullQueue

Please suggest if you have a solution for this. Thanks, Suraj
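In case it helps, here is a stripped-down sketch of the props/transforms pair, assuming the events arrive as single-line XML and that the filtering runs on the first full (heavy forwarder or indexer) instance that parses this sourcetype, not on a universal forwarder. The shorter anchor avoids the chained .+ wildcards, which can be expensive on long events; the stanza name setnull_4624_nullsid is arbitrary, and SOURCE_KEY defaults to _raw so it can be omitted:

props.conf (on the relevant sourcetype stanza):
TRANSFORMS-null = setnull_4624_nullsid

transforms.conf:
[setnull_4624_nullsid]
REGEX = <EventID>4624</EventID>.*'SubjectUserSid'>NULL SID<.*'SubjectLogonId'>0x0<
DEST_KEY = queue
FORMAT = nullQueue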
Hi, I'm a novice to Splunk and I have a question regarding visualization. I have my query like this:

|...myBaseQuery | chart c as "Count" by category

This results in me having only one legend entry in my visualization, "Count". I was wondering if there's any way to get all the category values as a legend on the right (see image)? I realized this is possible when I also use retailUnit in the chart command:

|...myBaseQuery | chart c as "Count" by retailUnit category

Then I get one label for each category (see image), but I want to achieve this without splitting by retailUnit. Is this possible?
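One possible approach (a sketch, not verified against the poster's data) is to pivot each category value into its own column so the chart legend shows one entry per category, for example with transpose; `category` and the base search are taken from the question:

|...myBaseQuery
| chart count as "Count" by category
| transpose 0 header_field=category column_name=series

A column or bar chart over that result should then render one legend entry per category value, without needing retailUnit as a second split field.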
Hi, Can someone assist me with breaking the following log data into separate events in props.conf? Each event should start with:

{ "Timestamp": "xxxxxxxxxxxx"

and end with:

}

See the log detail below, which should be split into two events.

{
  "Timestamp": "2024-03-18T07:25:32.208+00:00",
  "Level": "ERR",
  "Message": "Validation failed: \n -- ProductId: 'Product Id' must be greater than '0'. Severity: Error",
  "Properties": {
    "RequestId": "0HJYGFTHJK:00000003",
    "RequestPath": "/apps/-7/details",
    "CorrelationId": "87hjg76-gh678-77h7-ll98-pu7nsb67w567w",
    "ConnectionId": "KJUY686GT",
    "MachineName": "kic-aiy-tst-heaps-tst-6h6hfjk-980jk",
    "SolutionName": "Kic AIY - Test",
    "Environment": "test",
    "LoggerName": "Kic AIY - Test",
    "ApplicationName": "Kic AIY - Test",
    "ThreadId": "1",
    "ProcessId": "1",
    "ProcessUserId": "root",
    "SiteName": "Kic AIY - Test"
  },
  "Exception": {
    "ExceptionSource": "Api.Utilities",
    "ExceptionType": "FluentValidation.ValidationException",
    "ExceptionMessage": "Validation failed: \n -- ProductId: 'Product Id' must be greater than '0'. Severity: Error",
    "StackTrace": " at Api.Utilities.Behaviours.ValidationBehavior`2.Handle(TRequest request, RequestHandlerDelegate`1 next, CancellationToken cancellationToken),"
    "FileName": null,
    "MethodName": "Api.Utilities.Behaviours.ValidationBehavior`2+<Handle>d__2",
    "Line": 0,
    "Data": null
  },
  "RequestBody": null,
  "Additional": null
}
{
  "Timestamp": "2024-03-18T07:15:04.259+00:00",
  "Level": "ERR",
  "Message": "Validation failed: \n -- ProductId: 'Product Id' must be greater than '0'. Severity: Error",
  "Properties": {
    "RequestId": "0HJYGFTRJK:00000004",
    "RequestPath": "/apps/-7/details",
    "CorrelationId": "87hjg76-gh878-77h7-ll98-ku7nsb67w567w",
    "ConnectionId": "KJUY686GT",
    "MachineName": "kic-aiy-ts2t-heaps-tst2-6h6hfjk-980jk",
    "SolutionName": "Kic AIY - Test2",
    "Environment": "test",
    "LoggerName": "Kic AIY - Test2",
    "ApplicationName": "Kic AIY - Test2",
    "ThreadId": "1",
    "ProcessId": "1",
    "ProcessUserId": "root",
    "SiteName": "Kic AIY - Test"
  },
  "Exception": {
    "ExceptionSource": "Api.Utilities",
    "ExceptionType": "FluentValidation.ValidationException",
    "ExceptionMessage": "Validation failed: \n -- ProductId: 'Product Id' must be greater than '0'. Severity: Error",
    "StackTrace": " at Api.Utilities.Behaviours.ValidationBehavior`2.Handle(TRequest request, RequestHandlerDelegate`1 next, CancellationToken cancellationToken),"
    "FileName": null,
    "MethodName": "Api.Utilities.Behaviours.ValidationBehavior`2+<Handle>d__2",
    "Line": 0,
    "Data": null
  },
  "RequestBody": null,
  "Additional": null
}
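A minimal props.conf sketch for breaking these events (the sourcetype name json_error_log is just a placeholder, and the stanza has to be applied where the data is first parsed — indexer or heavy forwarder — not on a universal forwarder):

[json_error_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{\s*"Timestamp":
TIME_PREFIX = "Timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
TRUNCATE = 100000

This assumes every event really does begin with a line containing { "Timestamp": and that no other field starts a line with that exact sequence; TRUNCATE is raised only as a precaution for longer stack traces.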
Hi! I have a dashboard with 10 columns. If the values of column 1 and column 2 are different, I have to mark them in red; similarly for columns 3-4, 5-6, 7-8, and 9-10. I am using the code below, but it marks the whole column red (I only need the red color for the values which are different).

<format type="color" field="Storenumber">
  <colorPalette type="expression">if (Storeid!=Storenumber,"#53A051","#DC4E41")</colorPalette>
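As far as I know, a colorPalette expression in Simple XML only evaluates the cell's own value, not other columns in the same row, which is why the whole column ends up one colour. A workaround sketch, assuming it is acceptable to add a helper column per pair (the field name Store_status below is made up), is to compute the comparison in SPL and colour the helper field with a map palette:

... | eval Store_status=if(Storeid!=Storenumber, "mismatch", "match")

<format type="color" field="Store_status">
  <colorPalette type="map">{"mismatch":#DC4E41,"match":#53A051}</colorPalette>
</format>

The same eval/format pair would be repeated for each of the other column pairs (3-4, 5-6, 7-8, 9-10).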
Hi! I am trying to analyze Windows Firewall logs in a locally hosted Splunk Enterprise. The following has been done already:
- Logs are being ingested successfully to the server
- I can view the logs with details
- The app TA-winfw is already installed
However, the data is missing any IP-related info like src ip, dst ip, and protocol. I can see these fields in the local file stored at "C:\Windows\System32\LogFiles\Firewall\pfirewall.log", but I don't see any such values in the log data ingested into Splunk. Need help and guidance in case I am missing anything. Regards
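If those fields only appear in the text log file and not in the events currently being ingested, it may be that a Windows Event Log channel is being collected rather than pfirewall.log itself. A sketch of an inputs.conf monitor stanza for the file — the sourcetype and index values are placeholders and need to match whatever TA-winfw's field extractions are keyed to, so please check the add-on's documentation:

[monitor://C:\Windows\System32\LogFiles\Firewall\pfirewall.log]
disabled = false
sourcetype = <sourcetype_expected_by_TA-winfw>
index = <your_index>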
| tstats allow_old_summaries=true summariesonly=t values(Web.dest_ip) as dest_ip, values(Web.http_referrer) as http_referrer, values(Web.http_user_agent) as http_user_agent, values(Web.url) as url, values(Web.user) as src_user from datamodel=Web where (Web.src=* OR Web.url=*) by _time Web.src, Web.url
| `drop_dm_object_name("Web")`
| rename Web.src as src_host
| regex url= "^((?i)https?:\/\/)?\w{2,4}\.\w{2,6}:8080\/[a-zA-Z0-9]+\/.*?(?:-|\=|\?)"
| append
    [search index=audit_digitalguardian sourcetype=digitalguardian:process Application_Full_Name=msiexec.exe Command_Line="*:8080*" src_host="raspberryrobin.local"
    | stats values(index) as index, values(sourcetype) as sourcetype, values(Command_Line) as cmdline, values(_raw) as payload by _time, src_host, url]
Selected fields in the Splunk UI are not getting saved; each time we log in to the Splunk UI again, we need to select the fields once more.
Need help sorting out an issue that I'm having with the Lookup Editor. I have successfully uploaded the CSV into Splunk via the Lookup Editor, and it shows up correctly when I run | inputlookup sample.csv. But when I check it in the Lookup Editor, all the column fields are merged into one, so it displays incorrectly. I need to edit the lookup, so I need this fixed. Has anyone experienced this issue before? Thanks
In a perfect world I'd find a way to get this into the time picker, but I haven't seen suggestions for that (please warn me if I've missed something).

Q: Is the solution I've found for dealing with the previous business day workable, or have I missed an edge case that people have seen before (e.g., it blows up in cron)? Thanks.

I'm trying to find some way to evaluate a time window during a business week. The goal is a dashboard with a drilldown to the previous business day (for comparison to the main graph showing today's data). This means processing last Friday on Monday. The basic question has been asked any number of times, but the answers vary in complexity. The simplest approach I could find was using a 3-day window in the time picker and then adding an earliest/latest value via sub-search to limit the data:

https://community.splunk.com/t5/Splunk-Search/How-to-to-dynamically-change-earliest-amp-latest-in-subsearch-to/m-p/631220

The approach of:

<your index search>
    [ search index=summary source="summaryName" sourcetype=stash search_name="summaryName" field1=*
    | stats count by _time
    | streamstats window=2 range(_time) as interval
    | where interval > 60 * 15
    | eval earliest=_time-interval+900, latest=_time
    | fields earliest latest ]

seems simple enough: generate an earliest/latest based on the weekday. Applying this to my specific case of business hours during the business week, I get the following with a case on the weekday from makeresults, which at least seems like a lightweight solution:

index="foo"
    [ | makeresults
    | eval wkday = strftime( _time, "%a" )
    | eval earliest = case( wkday = "Mon", "-3d@d+8h", wkday = "Sun", "-2d@d+8h", wkday = "Sat", "-1d@d+8h", 1=1, "@d+8h" )
    | eval latest = case( wkday = "Mon", "-3d@d+17h", wkday = "Sun", "-2d@d+17h", wkday = "Sat", "-1d@d+17h", 1=1, "@d+17h" )
    | fields earliest latest ]
| stats earliest( _time ) as prior latest( _time ) as after
| eval prior = strftime( prior, "%Y.%m.%d %H:%M:%S" )
| eval after = strftime( after, "%Y.%m.%d %H:%M:%S" )
| table prior after

And it even seems to work: on Sunday the 17th I get:

prior                after
2024.03.15 08:00:00  2024.03.15 16:59:59

The only question now is whether there is some edge case I've missed (e.g., running via crontab) where the makeresults will generate an odd time or something. Thanks
I'm currently trying to create a search head cluster for two search head servers while configuring the deployer server.

[Environment Description]
On Search Head Server 1 (10.10.10.5), there are two Splunk daemons installed as follows:
1) Search Head (SH) — Path: /opt/splunk_sh  // I'm going to designate this daemon as a deployer member.
2) Indexer Cluster Master (CM) — Path: /opt/splunk_cm
The account running each daemon on Search Head Server 1 is 'splunk', the same for both.

On Search Head Server 2 (10.10.10.6), there is one Splunk daemon installed:
1) Search Head (SH) — Path: /opt/splunk_sh  // I intend to set this daemon as both a deployer member and the search head captain.

Deployer Server (10.10.10.9)
1) Search Head Deployer — Path: /opt/splunk

So, with two search head servers and a deployer server in place, when I tried to configure the member settings on Search Head Server 1, I encountered the following error after entering the command:

[Command]
/opt/splunk_sh/bin/splunk init shcluster-config -auth <admin:adminpw> -mgmt_uri https://10.10.10.5:8089 -replication_port 8080 -replication_factor 2 -conf_deploy_fetch_url https://10.10.10.9:8089 -secret <<pw>>

[Command Result]
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Can't create directory "/opt/splunk/.splunk": No such file or directory

Please ignore the WARNING, as I haven't properly configured the SSL certificate files yet. The problem is that I'm having difficulty setting the splunk_home path correctly, as indicated by the question title. While searching through community posts, I tried the following, but it didn't work out:

Attempt 1) Setting /opt/splunk_sh/etc/splunk-launch.conf
I already set SPLUNK_HOME=/opt/splunk_sh in this conf file when installing the two daemons. Now I'm not sure what to do next. Please help me out.
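One thing that may be worth checking (a guess based on the error path, not a confirmed fix): the error mentions /opt/splunk/.splunk, which suggests the CLI is resolving SPLUNK_HOME to /opt/splunk rather than /opt/splunk_sh for that shell session. Explicitly exporting the variable before re-running the command is a quick way to test that:

export SPLUNK_HOME=/opt/splunk_sh
$SPLUNK_HOME/bin/splunk init shcluster-config -auth <admin:adminpw> -mgmt_uri https://10.10.10.5:8089 -replication_port 8080 -replication_factor 2 -conf_deploy_fetch_url https://10.10.10.9:8089 -secret <<pw>>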
Hey Everyone, I would like to build a dashboard, or use any pre-defined one, in order to collect all the details of the SOAR platform and present them in a summary report of how many active playbooks have been run, with further information about successful actions and failed activities. Are there any apps that can assist with the creation of such a dashboard, or any suggestions on how to do it? I know there is one on SOAR to use, but I need to build this as a Splunk dashboard and not within SOAR itself. Thanks, Efi.
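If the SOAR data is already being forwarded into Splunk (for example via the Splunk App for SOAR), a summary panel could be driven by a search along these lines. Everything here is a placeholder — the index, sourcetype, and field names depend entirely on how the forwarding is configured, so they have to be adjusted to the actual data:

index=<soar_index> sourcetype=<playbook_run_sourcetype>
| stats count as runs
        count(eval(status="success")) as successful
        count(eval(status="failed")) as failed
        by playbook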
I was recently working on Splunk Enterprise Security to have a forwarder installed on a Linux machine and display its data on the server. While working on this, I noticed that the indexer search option was in red status, so I went ahead and enabled the suggestion the system was offering. After that, the server asked for a restart and now it won't come back online. Could anyone help here please? Below is the output when I run splunk start:

Done [ OK ]
Waiting for web server at https://127.0.0.1:8000 to be available..............
WARNING: web interface does not seem to be available!

Further, in the file /opt/splunk/var/log/splunk/splunkd.log this is what I see:

03-17-2024 12:10:19.240 +0000 ERROR ClusteringMgr [33823 MainThread] - pass4SymmKey setting in the clustering or general stanza of server.conf is set to empty or the default value. You must change it to a different value.
03-17-2024 12:10:19.242 +0000 ERROR loader [33823 MainThread] - clustering initialization failed; won't start splunkd

I changed the pass4SymmKey and it did not help. Could anyone help here please?
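For reference, the setting the error complains about normally lives in $SPLUNK_HOME/etc/system/local/server.conf. A minimal sketch, with the secrets as placeholders (the error names both the clustering and general stanzas, so a non-default value is needed in whichever one this instance actually uses):

[general]
pass4SymmKey = <a_non_default_secret>

[clustering]
pass4SymmKey = <secret_shared_with_the_cluster_manager>

After editing, the value is re-encrypted on the next successful start; if the instance is an indexer cluster member, the clustering key must match the one on the cluster manager.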
I'm trying to use the Splunk App for SOAR to forward logs and events from SOAR to Splunk Enterprise. The servers seem to be connected (test connectivity works), but the data (events, playbook runs, etc.) isn't being indexed and doesn't appear in search in Splunk. I tried reindexing the data through SOAR, but it didn't work. Adding an audit input in the app works fine, but data isn't being indexed in real time into the expected indexes (I did create them using the "Create Indexes" button in the app). Did anyone experience anything similar, or does anyone have an idea as to what might be the issue?
Hi All, I currently have a primary standalone Enterprise Security (ES) search head located in the main data center. Every day, a cron job copies the entire /opt/splunk/etc/apps directory to the secondary standalone Enterprise Security search head, which is located at the DR site. Now the question arises: should I also copy the primary KV Store data, located in the var/lib directory, to the secondary ES search head? Currently, I'm only syncing the apps folder and not the var/lib directory. In the event of an issue with the primary search head in the future, I plan to bring up the secondary search head. Will there be any issues with the KV Store data if I'm not syncing the var/lib directory between the primary and secondary search heads? Note: since we're not using any custom-made KV Store lookups and only depend on the default ones generated by the different Enterprise Security apps, it makes us wonder if syncing the var/lib directory between the primary and secondary search heads is essential. Regards, VK
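If the KV Store contents do turn out to matter, one alternative to copying var/lib wholesale is the built-in KV Store backup/restore CLI — a sketch, with an arbitrary archive name:

/opt/splunk/bin/splunk backup kvstore -archiveName es_kvstore_daily
# copy the resulting archive to the DR search head, then on that host:
/opt/splunk/bin/splunk restore kvstore -archiveName es_kvstore_daily

As far as I know, some ES state (for example investigations and incident review status changes) lives in KV Store collections, which may be worth weighing when deciding whether the sync is essential.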
| mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes

From the above query I am getting the results by service and application_codes. But my requirement is to get the application_codes from a CSV file, and only those with Type=error1. Below is the CSV file:

application_codes, Description, Type
0, error descp 1, error1
10, error descp 2, error2
10870, error descp 3, error3
1206, error descp 1, error1
11, error descp 3, error3
17, error descp 2, error2
18, error descp 1, error1
14, error descp 2, error2
1729, error descp 1, error1
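A possible sketch, assuming the CSV has been uploaded as a lookup file named application_codes.csv (the name is a placeholder) and that its application_codes values match the metric dimension exactly:

| mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes
| search
    [| inputlookup application_codes.csv
     | where Type="error1"
     | fields application_codes]

The subsearch expands to an OR of the application_codes values whose Type is error1, so only those codes remain in the mstats output.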
Hi all, I have installed and configured the FortiWeb for Splunk app. The problem is that the time in the log is correct, but the time I see in the Splunk time column is 7 hours off. It should be mentioned that there is a field in the logs called timezone_dayst that differs from my time zone by exactly 7 hours. I also added TZ = MyTimeZone to the props.conf of the app, but the problem still exists. For example, in the image below, it can be seen that the time is 8:37 while the log time is 1:07, and of course timezone_dayst has a drift (-3:30 instead of +3:30). Any ideas are appreciated.
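A sketch of the kind of props.conf stanza that usually addresses this — the sourcetype name and time zone are placeholders, and the stanza must be deployed on the first full Splunk instance that parses the data (indexer or heavy forwarder), not only on the search head:

[<your_fortiweb_sourcetype>]
TZ = <Region/City, e.g. Asia/Tehran>

One caveat: TZ only takes effect when the raw timestamp itself carries no time zone indicator. If Splunk is parsing a timestamp that already includes the (apparently wrong) timezone_dayst offset, TIME_PREFIX and TIME_FORMAT would need to be adjusted so that the local time field is used instead.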