Thanks @simon21. This worked a treat. To go one step further: if you are using a classic dashboard, you can simply put the CSS in the source code. Create a new hidden row using depends="$alwaysHideCSSStyle$", then put the CSS code within <panel> -> <html> -> <style> tags:

<row depends="$alwaysHideCSSStyle$">
  <panel>
    <html>
      <style>
        div.leaflet-popup-content tr:first-child { display: none; }
        div.leaflet-popup-content tr:nth-child(2) { display: none; }
      </style>
    </html>
  </panel>
</row>
@VatsalJagani / @livehybrid https://apps.splunk.com/app/1780/ - Does this EXOS app still help with parsing, or is it an outdated one? Is EXOS an old Extreme operating system?
@Anders333 Is it possible for you to configure the app to use standard log rotation (e.g., rename and create a new file when full, or truncate/append)? If you keep the current rotation scheme, Splunk may miss or duplicate events, and reliable ingestion cannot be guaranteed. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Hi @Anders333 I think the main issue here is that it starts overwriting events from the top of the file. I believe this is a pretty unusual approach, as you will end up with events in a strange order within the file, e.g.:

17/Jun/2025 09:08 - Event 5
17/Jun/2025 09:10 - Event 6
17/Jun/2025 09:01 - Event 1
17/Jun/2025 09:03 - Event 2
17/Jun/2025 09:05 - Event 3
17/Jun/2025 09:06 - Event 4

The issue here is that even if you can convince Splunk to start reading the events again from the top of the file, it may end up re-ingesting events 1-4. Is there any way you can reconfigure the output of your app to log differently, e.g. rotating into a new log file? Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @Emre , Splunk displays the logs that it receives. Are you sure that you are sending these data to Splunk? Do you see them in the raw logs in Splunk? Maybe the issue is incorrect parsing; see https://docs.mendix.com/developerportal/operate/splunk-metrics/ for guidance. Ciao. Giuseppe
Hi @Emre Yes, you can send JSON via HEC into Splunk Enterprise / Splunk Cloud. Check out https://docs.splunk.com/Documentation/Splunk/9.4.2/Data/HECExamples which has some good examples of how you can do this, but at a basic level you have two options: you can send raw JSON to https://mysplunkserver.example.com:8088/services/collector/raw or you can send structured events to https://mysplunkserver.example.com:8088/services/collector/event A structured event for the /event endpoint would look something like this: {
"time": 1426279439, // epoch time
"host": "localhost",
"source": "random-data-generator",
"sourcetype": "my_sample_data",
"index": "main",
"event": "Hello world!" // or {"yourKey":"yourVal"} for example
} Check out https://docs.splunk.com/Documentation/Splunk/9.4.2/Data/FormateventsforHTTPEventCollector for more info on the fields you can send in events to HEC. Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
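Once events are flowing, a quick search using the metadata from the example payload above (index main, sourcetype my_sample_data - adjust to whatever you actually send) should confirm they arrived:

index=main sourcetype=my_sample_data earliest=-1h
| table _time host source sourcetype _raw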
Hi @Anders333 , No, I said that if you write the same log file again, Splunk doesn't read it again. But, to better understand your issue: what is the behavior of your ingestion? Ciao. Giuseppe
Perhaps the reason you are struggling is because you have painted yourself into a corner; try taking a step back. How did you get to the position of having 2 multi-value fields in the first place? Perhaps there is another way to create the table so that you don't lose the correlation between instance name and execution time (one common pattern is sketched below). Please share some anonymised sample events and the search that you are using to create the table in the first place.
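For illustration only - assuming the two multi-value fields have hypothetical names instance_name and exec_time and their values line up positionally, you can zip them together before expanding, so each instance stays paired with its own execution time:

| eval pair=mvzip(instance_name, exec_time, "|")
| mvexpand pair
| eval instance_name=mvindex(split(pair, "|"), 0), exec_time=mvindex(split(pair, "|"), 1)
| table instance_name exec_time

This only works if the two fields really are parallel; if they are not, the underlying search needs restructuring, which is why the sample events matter.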
I would suggest a slightly more optimal version that does not use the subsearch: index=abc
| stats count by source
| inputlookup append=t my_source_lookup.csv
| fillnull count
| stats sum(count) AS total BY source
The internal issue of overwriting and potentially losing logs is not a problem for my application, but thanks for the heads up. Are you saying that Splunk is not able to detect that the application starts writing at the beginning of the file while it continues to checksum at EOF?
Thanks @gcusello , I already implemented the Mendix documentation. It sends some data to Splunk, and I use the HEC method. But I would like to send some specific data, for example HTTP status or the latest error message. In Mendix I create logs and I added those values inside. But how do I display or get this information in Splunk? I only see some values such as hostname or level.
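Assuming the Mendix log lines reach Splunk as JSON and the keys are named something like httpStatus and errorMessage (placeholders - adjust to your actual payload, index, and sourcetype), a search-time extraction sketch with spath could look like this:

index=main sourcetype=mendix:log
| spath path=httpStatus output=http_status
| spath path=errorMessage output=latest_error
| table _time http_status latest_error

If the values are embedded in a plain-text message rather than JSON, rex would be the tool instead of spath.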
Hi @Emre , Splunk has many ways to ingest logs: syslog, HEC, API, etc. Which of them can be implemented on Mendix? Anyway, see https://docs.mendix.com/developerportal/operate/splunk-metrics/ and you should find the solution. Ciao. Giuseppe
Hi @Anders333 , what kind of failure are you reporting? Your situation has an intrinsic issue: the log is checked by Splunk every few seconds, but if the rotation overwrites the file before it is read, you lose the last logs. Then, if the beginning of the file is always the same (the first 256 chars by default), Splunk doesn't read the file twice. Ciao. Giuseppe
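As a side note, one way to see how the forwarder's tailing processor is actually treating the file is to search its internal logs (component names vary somewhat between Splunk versions; the host and file name below are placeholders):

index=_internal sourcetype=splunkd host=<your_uf_host> (component=WatchedFile OR component=TailReader OR component=TailingProcessor) "<your_log_file_name>"
| table _time component _raw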
Hi @av3rag3 , at first, don't use the search command after the main search because you'll have slower searches: put all the search terms as far left as possible, ideally in the main search. Then, why do you use source in the BY clause of the stats command if you always have only one source? In general, without the condition source="xyz", it's normal that you don't get the results for sources with count 0, because they don't come back from the search. If you have a list of the sources to monitor, you could insert them in a lookup and add them to the search with count=0, something like this: index=abc
| stats count by source
| append [ | inputlookup my_source_lookup.csv | eval count=0 | fields source count ]
| stats sum(count) AS total BY source

Ciao. Giuseppe
Hi everyone, I am a Mendix developer and I would like to implement Splunk Cloud for monitoring. I already have the HEC token, port and hostname in my Mendix cloud environment. I would like to send error logs to Splunk Cloud from Mx. Based on my research, JSON format is a common practice. Is there any way I can send my data to Splunk in JSON format? I don't know how that works for Splunk. Any suggestions?
Hello, I have a Windows machine with a UF installed that collects various logs such as wineventlog. These logs work correctly and are ingested into Splunk, and have been for some time. I wanted to add a new log from a piece of software that runs on the machine and added it to the inputs.conf file. The log is a trace log for the software and is seen added to monitoring in the _internal index with no errors. The log is ingested correctly initially as a batch input, but the UF fails to monitor the file afterwards. The log is a fixed size of 50MB, and once the log is full it will start overwriting the oldest event in the log, meaning it will start at the top. I have already tried:

- changing the initCrcLength
- changing the ignoreOlderThan
- setting NO_BINARY_CHECK = true - this fixed some previous errors where Splunk believed the file to be binary; it's just ANSI encoded
- setting alwaysOpenFile = true - this did not seem to change anything

Thanks in advance for any tips, tricks or advice.
Hello, with this query: index=abc | search source = "xyz" | stats count by source I can see the count of sources having count more than 0, but I can't manage to get the ones with 0 count. Is anyone able to help me please? Thank you