All Posts

Some additional details on why it happens here: https://community.splunk.com/t5/Getting-Data-In/splunk-winprintmon-exe-splunk-winPrintMon-monitorHost/m-p/680673
Here below is the reply from Splunk support, who contacted the Development team:

Please find the detailed explanation for the issue and the solution.

The logs and your inputs.conf excerpt indicate that the Splunk Universal Forwarder (UF) is indeed ignoring the interval parameter specified for the WinPrintMon modular input. This is explicitly stated in the log message:

> Ignoring parameter "interval" for modular input "WinPrintMon" when scheduling the runtime for script

Why is this happening? The reason behind this behavior lies in how Splunk UF handles script-based modular inputs and their failures.

Script failures and retries: When the splunk-winprintmon script fails (in this case, due to the disabled Printer service), Splunk UF doesn't wait for the configured interval to retry. Instead, it attempts to restart the script almost immediately. This rapid retry mechanism is likely designed to ensure quick recovery from transient errors.

Interval adherence on success: The interval parameter is only respected when the script completes successfully. In other words, if the splunk-winprintmon script runs without errors, Splunk UF will wait the specified 600 seconds before executing it again. The combination of script failures and the ignored interval leads to the observed high frequency of error messages in the _internal index (3 times per second). This can potentially overwhelm the Splunk indexer and impact overall system performance.

Solution: the primary issue is the disabled Printer service causing the splunk-winprintmon script to fail.

* Enable the Printer service if print monitoring is required.
* Disable the WinPrintMon input in inputs.conf if print monitoring is not needed.

Splunk UF's behavior of ignoring the interval parameter for failing script-based modular inputs is by design. It prioritizes quick recovery from errors over strict adherence to the configured polling interval.
Understanding this behavior is crucial for troubleshooting and optimizing Splunk UF deployments, especially when dealing with inputs that might experience frequent failures. The error 0x800706ba indicates that the "RPC server is unavailable," which is due to the Printer service being disabled.  
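For reference, disabling the input would look something like the following in inputs.conf (a sketch — the stanza name after `WinPrintMon://` is whatever your deployment uses; adjust it and the app location to your environment):

```
# inputs.conf -- disable the WinPrintMon modular input when print
# monitoring is not needed, so the failing script stops being retried.
# "printmon" is a placeholder stanza name; match yours.
[WinPrintMon://printmon]
disabled = 1
```

After deploying the change (e.g. via your deployment server), restart the UF and the rapid-fire error messages in _internal should stop.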
Hello,

I would like to convert my hexadecimal code to a bit value based on this calculation.

Hex code: 0002
Separate into 2 bytes: 00 / 02

2-byte bitmask:
Byte 0: HEX = 00 - 0000 0000
Byte 1: HEX = 02 - 0000 0010

Byte 1 Byte 0 - 0000 0010 0000 0000

Calculate the position of the non-zero bit, counting from the right side:
Byte combination - 0000 0010 0000 0000
At position 10 (counting from 1 on the right) we got a 1, so the bit value is 9.

I need to calculate this in Splunk, where the HEX_Code is the value from the lookup. Thanks in advance! Happy Splunking!
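One possible approach (a sketch, assuming the field is named HEX_Code, always holds four hex digits, has its two bytes swapped as in the example above, and has exactly one bit set — otherwise this returns the highest set bit): swap the bytes with substr, convert with tonumber base 16, then take the floor of the base-2 logarithm to get the zero-based bit position.

```
| makeresults
| eval HEX_Code="0002"
| eval swapped=substr(HEX_Code,3,2).substr(HEX_Code,1,2)
| eval dec=tonumber(swapped,16)
| eval bit=floor(log(dec,2))
```

For "0002" the swapped value is "0200" = 512 decimal, and floor(log(512,2)) gives 9, matching the worked example. Replace the first two lines with your lookup-driven search.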
Yes @ITWhisperer, I tried. Let me check my data again in case something is missing in the events, or try with the data you have generated.
Where is "all" coming from? It is not shown in your source listing. Also, the depends block you have shown is not part of valid SimpleXML. How are you using the latest_product_id token?
What happens when you execute the restart command manually? Do you use the correct user in your Ansible script? Maybe you have to set "become: true" if Splunk runs under root.
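For example, a hypothetical Ansible task (the install path and the idea that Splunk runs as root are assumptions — adjust to your environment):

```
# Hypothetical task: restart Splunk with privilege escalation.
# /opt/splunk is an assumed install path.
- name: Restart Splunk
  ansible.builtin.command: /opt/splunk/bin/splunk restart
  become: true
```

Without become, the command runs as the Ansible connection user, which may lack permission to restart the service.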
Dear @PickleRick, I followed the steps in https://www.fortinet.com/content/dam/fortinet/assets/alliances/Fortinet-Splunk-Deployment-Guide.pdf but it did not solve the issue.
Dear @PickleRick, thank you for your correction. Do you have any suggestion or best practice for me?

Regards, Dika
OK. This approach is wrong on many levels.

1. Receiving syslog directly on an indexer (or HF or UF) causes data loss whenever you need to restart that Splunk component.

2. When you're receiving syslog directly on Splunk, you lose at least some of the network-level metadata and you can't use that information to - for example - route events to different indexes or assign them different sourcetypes. Because of that you need to open multiple ports for separate types of sources, which uses up resources and complicates the setup.

3. In order to receive syslog on a low port (514), Splunk would have to run as root. This is something you should _not_ be doing. Are you sure that input has even opened that port?

4. If you have two indexers (clustered or standalone?) and receive on only one of them, you're asking for data asymmetry.
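The commonly recommended pattern instead (a sketch, assuming rsyslog — syslog-ng works equally well) is to let a dedicated syslog daemon listen on 514, write events to per-source files, and have a Universal Forwarder monitor those files:

```
# /etc/rsyslog.d/fortigate.conf -- hypothetical example; the port,
# ruleset name, and output path are assumptions for illustration.
module(load="imudp")
input(type="imudp" port="514" ruleset="fromFortigate")

ruleset(name="fromFortigate") {
    action(type="omfile" file="/var/log/remote/fortigate.log")
}
```

This decouples reception from Splunk restarts, keeps Splunk off privileged ports, and lets you split sources into separate files (and thus separate sourcetypes/indexes) at the syslog layer.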
I found a notification: the following indicators are red too:

File Monitor Input
Forwarder Ingestion Latency
Ingestion Latency
Large and Archive File Reader-0
Large and Archive File Reader-1
Real-time Reader-0
Real-time Reader-1

Is it because too many logs are sent from the FortiGate?
Did you even try my solution? Here is a run-anywhere example showing it working with dummy data

| makeresults count=100
| fields - _time
| eval department="Department ".mvindex(split("ABCDE",""),random()%5)
| eval version=round(random()%3,1)
| eval thumb_print=random()%10
``` The lines above create some dummy data and can be replaced by your index search ```
| dedup version thumb_print department
| eval version=if(version="2.0","NEW_RUNS","OLD_RUNS")
| chart count(thumb_print) by department version
| fillnull value=0
| eval total=NEW_RUNS+OLD_RUNS
| eval perc=round(100*NEW_RUNS/total,2)
| eval department=substr(department, 1, 50)
| table department OLD_RUNS NEW_RUNS perc
| sort -perc
Thanks @PaulPanther @ITWhisperer

The issue is still there. Output:

department     OLD_RUNS   NEW_RUNS   total   PERC
Department1    10         0          10      0%
Department1    0          20         20      100%

Basically the old and new counts of the same department are not in the same row, so with respect to new runs all percentages come out as 100%, because old runs show as 0.
If you are using Classic/SimpleXML dashboards, you can do this with CSS. For this you need to give your panel an id (so it gets tagged so CSS can select it), then you need to know the order of the series in the charts, as they are numbered. For example, if you name your panel "panel_one", and your Total was the second series (index 1), you could do something like this

<panel id="panel_one">
  <html depends="$alwaysHide$">
    <style>
      #panel_one svg g.highcharts-data-labels.highcharts-series-1 {
        display: none !important;
      }
    </style>
  </html>
  <chart>
Thank you, below is splunkd.log

09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_HttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down name="HttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_DmcProxyHttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_Duo2FAHttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_S3ConnectionPoolManager"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down name="S3ConnectionPoolManager"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_AwsSdk"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down name="loader"
09-20-2024 06:36:54.628 +0000 INFO Shutdown [2498 Shutdown] - Shutdown complete in 5.124 seconds
09-20-2024 06:36:54.629 +0000 INFO loader [2296 MainThread] - All pipelines finished.
Sorry, made a mistake with the calculation of totals. I adjusted the search in my previous answer.
Hi Paul,

Thanks for the help. But this still has some issues. Output:

department     OLD_RUNS   NEW_RUNS   total   PERC
Department1    10         0          10      0%
Department1    0          20         20      100%

Basically the old and new counts of the same department are not in the same row, so with respect to new runs all percentages come out as 100%, because old runs show as 0.
Subsearches are limited to 50k events, which is one of the issues with using joins. Also, your dedup seems to ignore whether more than one department has the same version and thumb_print (unless, of course, thumb_prints or versions are unique to a department). Try something like this

index=abc
| dedup version thumb_print department
| eval version=if(version="2.0","NEW_RUNS","OLD_RUNS")
| chart count(thumb_print) by department version
| fillnull value=0
| eval total=NEW_RUNS+OLD_RUNS
| eval perc=round(100*NEW_RUNS/total,2)
| eval department=substr(department, 1, 50)
| table department OLD_RUNS NEW_RUNS perc
| sort -perc
Forget the rest of the search. What do you get from the following?

index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x"
| lookup file.csv cidr AS sourceip OUTPUT provider AS sourceprovider, area AS sourcearea, zone AS sourcezone, region AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT provider AS destprovider, area AS destarea, zone AS destzone, region AS destregion, cidr AS dest_cidr
| table sourceip sourceprovider sourcearea sourcezone sourceregion src_cidr destip destprovider destarea destzone destregion dest_cidr

Is the output correct? Using your mock lookup data, I made the following emulation

| makeresults format=csv data="sourceip, destip
1.1.1.116,10.5.5.5
10.0.0.5,2.2.2.3
2.2.2.8, 1.1.1.90
192.168.8.1,10.6.0.10"
``` the above emulates index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x" ```
| lookup file.csv cidr AS sourceip OUTPUT provider AS sourceprovider, area AS sourcearea, zone AS sourcezone, region AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT provider AS destprovider, area AS destarea, zone AS destzone, region AS destregion, cidr AS dest_cidr
| fields sourceip sourceprovider sourcearea sourcezone sourceregion src_cidr destip destprovider destarea destzone destregion dest_cidr

This is what I get, exactly as expected (unmatched IPs have blank lookup fields):

sourceip     sourceprovider  sourcearea  sourcezone  src_cidr    destip     destprovider  destarea  destzone  dest_cidr
1.1.1.116    Unit 1          Finance     2           1.1.1.1/24  10.5.5.5
10.0.0.5                                                         2.2.2.3    Unit 2        HR        16        2.2.2.2/27
2.2.2.8      Unit 2          HR          16          2.2.2.2/27  1.1.1.90   Unit 1        Finance   2         1.1.1.1/24
192.168.8.1                                                      10.6.0.10
index=abc
| dedup version thumb_print
| stats count(eval(if(version!="2.0",thumb_print,null()))) as OLD_RUNS count(eval(if(version="2.0",thumb_print,null()))) as NEW_RUNS by department
| fillnull value=0
| eval total=NEW_RUNS+OLD_RUNS
| eval perc=((NEW_RUNS/total)*100)
| eval department=substr(department, 1, 50)
| eval perc=round(perc, 2)
| sort -perc
If you don't have GUI access to the remote search head, you must ask your infra team. They should be able to confirm whether the custom fields are configured on the remote search head.