It is not clear what you actually want. For example:
- Do you want an hourly average of the humidity for each hour, then the minimum and maximum of that hourly average over your full time period, discounting events that fall outside the minimum and maximum average for the hour in which they were taken?
- Do you want the average for the day, then the minimum and maximum over the full time period, discounting events that fall outside this daily average?
- Do you want the average over the whole time period, discounting values that are more than a specified distance from the average?

Each of these would need different SPL. Please explain what you are trying to do in non-SPL terms.
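To illustrate how different the SPL can be, here is a sketch of the last interpretation only - discard values more than a fixed distance from the overall average. The index, sourcetype, field name, and threshold are all made up for illustration:

```
index=sensors sourcetype=humidity
| eventstats avg(humidity) AS avg_humidity
| where abs(humidity - avg_humidity) <= 10
| timechart span=1h avg(humidity) AS hourly_avg
```

The other two interpretations would need per-hour or per-day grouping before the outlier filter, so the shape of the search changes entirely depending on what you mean.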
I am working on a dashboard that has a bunch of fields and will be used by multiple teams and people, each needing different fields from the table. Is there any way to add a toggle, filter, or anything similar to provide a couple of presets (e.g. fields A, D, E, H as preset 1 for team 1; fields B, C, D, F, G as preset 2 for team 2; and so on)? I also use filters on fields in the dashboard table, so if possible I would want hiding a field to not impact the filters at all. Thanks in advance.
Yes, unfortunately I realised that. Let's say it is a missing feature that would be useful for making the names inside the boxes or panels more readable and for making the whole dashboard look more modern. I opened a post on Splunk Ideas for this missing functionality.
Palo Alto introduced HTTP event forwarding in PAN-OS 8.x, so if you have any recent install it should support that as outbound log streaming. Alternatively, the logs can be exported over syslog, but that becomes much more difficult to ingest if you are new to Splunk. Once you can export the HTTP event stream from the Palo Alto device, you need to set up your Splunk instance to collect via HEC (HTTP Event Collector); there is a lot of documentation on how to do that. Warning: Palo Alto can generate a tremendous amount of logs and will almost certainly exceed your trial license capacity.
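On the Splunk side, enabling HEC is mostly a matter of turning on the HTTP input and creating a token. A minimal inputs.conf sketch - the stanza name, token, index, and sourcetype here are placeholders, not real values:

```
# inputs.conf on the receiving Splunk instance
[http]
disabled = 0
port = 8088

# Hypothetical token stanza for the Palo Alto feed
[http://paloalto_hec]
token = <generated-token-guid>
index = pan_logs
sourcetype = pan:log
```

In practice you would create the token through the UI (Settings > Data Inputs > HTTP Event Collector), which writes an equivalent stanza for you.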
You haven't mentioned which OS specifically, or what else is or may be using resources. Since your system exceeds the minimum recommendations, I would look at the system as a whole. You may need an OS expert rather than a Splunk expert to help track this down.
So just to close the loop: after some deep diving by a Splunk support rep, the issues that caused me to not see all the jobs on that page were:

1. Permission levels - I was not an admin, so several private jobs were not visible to me.
2. Incorrect expectations of how the page should work. I assumed it would show all jobs from the last x amount of time, regardless of how often a job runs or any other job attributes. The page is intended to be used in real time, not so much for historical runs from, say, a month ago.
3. TTL - each job has a life expectancy that determines how long it is visible on that page. The calculations are convoluted and not obvious, so some jobs may show for longer than others depending on various things:
   - a scheduled search should adhere to its dispatch.ttl
   - an ad-hoc search should default to 10 minutes (no matter how far back you are searching)
   - an alert has a TTL that depends on the action it is taking
4. A bug that created a 2025 expiry date on the TTL for some of my searches/jobs, which added to the confusion about why some jobs show and others don't. The rep was unable to determine the cause of the 2025 bug.
Text align is not currently an option with Markdown boxes in Dashboard Studio. Here is a link to what is possible. https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/chartsText#Source_options_for_Markdown
Another option, perhaps closer to what you seek, is to have each input set a token with the appropriate query string. The search then just invokes that token. For instance, if "banana" is selected, the input's <change> element might set a token called $query$ to whatever is needed to search for fruit. The <query> element then becomes simply <query>$query$</query>.
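A minimal Simple XML sketch of this pattern - the input values, index, and search strings here are made up for illustration:

```xml
<form>
  <fieldset>
    <input type="dropdown" token="selection">
      <label>Category</label>
      <choice value="banana">banana</choice>
      <choice value="carrot">carrot</choice>
      <change>
        <!-- Each selection sets $query$ to a full search string -->
        <condition value="banana">
          <set token="query">index=produce category=fruit</set>
        </condition>
        <condition value="carrot">
          <set token="query">index=produce category=vegetable</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>$query$</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

The panel's search is nothing but the token, so all of the per-selection logic lives in the input's <change> block.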
No worries, I suppose your attempts at English are better than my Japanese. Loopback is the name of a virtual network interface that every networked host has - it is used by software to talk to other components on the same host (that's the one with the 127.0.0.1 address). Anyway:

TCP 0.0.0.0:8000 0.0.0.0:0 LISTENING

This line says that your port 8000 is listening on 0.0.0.0, which means it should be reachable from everywhere (if the connections are not filtered at other layers). So you have to check your Windows firewall - as far as I remember, Windows Server by default blocks most incoming communication, so you might need to create a rule to allow traffic from the network to local port 8000.
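If it does turn out to be the firewall, a rule along these lines, run from an elevated PowerShell prompt, should open the port (the display name is arbitrary):

```powershell
# Allow inbound TCP 8000 (Splunk Web) through Windows Firewall
New-NetFirewallRule -DisplayName "Splunk Web 8000" `
    -Direction Inbound -Protocol TCP -LocalPort 8000 -Action Allow
```

You can verify it afterwards with Get-NetFirewallRule -DisplayName "Splunk Web 8000".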
Hello everyone, I have built a dashboard with Dashboard Studio, but I have noticed that although the panels expose many properties, you cannot change the position of the markdown text. I have already looked through the documentation, to no avail (maybe I am missing something). By "changing position" I simply mean aligning the text inside the panel left, centre, or right. Do you have any ideas? Thank you, biwanari
Hi @PickleRick, That is indeed the setup I have. Correct, there isn't an issue with the connection between the HF and Splunk Cloud; rather, the results from the DB Connect app are not being sent to Splunk Cloud. I am mostly looking to see if anyone else has faced this issue before, because I have checked several things and all looks well, but I have no real solution to get the data transferred.
Hi @Strangertinz , ok, let me know if I can help you further. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @gcusello, I will check again. I also used batched results and still did not see any data, which is why I am not narrowing my focus to the rising column, but I will evaluate further and make sure there are no errors with the rising column. Thanks!
@yuanliu Thank you for your help. I accepted your suggestion as the solution, with the following notes:
- sendemail didn't work because I wasn't an admin
- Using an alert worked just fine

Can you clarify what you meant by "join will get me nowhere"? The result using join worked just fine. My intention is to "join" the data, not to "append" it. When I used append, the data was appended to the original data and I had to use the stats command to merge it. Thanks
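For anyone following along, the append-then-stats merge mentioned above can be sketched like this (the indexes, sourcetypes, and field names are hypothetical):

```
index=orders sourcetype=sales
| stats sum(amount) AS total BY customer_id
| append
    [ search index=crm sourcetype=accounts
      | stats latest(region) AS region BY customer_id ]
| stats values(total) AS total values(region) AS region BY customer_id
```

This yields one merged row per customer_id, similar to what join produces, but without join's subsearch row and runtime limits, which is usually why stats-based merging is recommended over join.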
So all 3 files are picked up by this one monitor stanza? Are the files all truly the same format, aka "sourcetype"? Can you explain a bit more about why we are omitting just one file? What can we use to uniquely identify this particular source - the host? It sounds like it has to be source plus something else to make it unique. If you can't differentiate them at the source, then perhaps something like ingest_eval or a "sourcetype rename" is needed. It seems to me you might just be overloading the config... I mean, maybe just don't deploy an input that picks up this file in prod? That's why I asked whether they truly are all the same sourcetype/format.
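One way to skip a single file at the input layer is the blacklist setting on the monitor stanza, deployed only to prod. A sketch, with a placeholder path and file name:

```
# inputs.conf - prod version of the stanza only
[monitor:///var/log/myapp]
sourcetype = myapp:logs
# Hypothetical file to exclude; blacklist takes a regex matched against the full path
blacklist = restricted\.log$
```

The stage version of the stanza would simply omit the blacklist line, so all three files are ingested there.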
Hi matty, Thanks for your quick response.

"The lab and prod file paths are the same?" - Yes, but the sourcetype name is different for prod and stage. I can't set the sourcetype in props because three log files are part of one sourcetype, and among them I am restricting one log file - but I want all three logs in stage.

"Also, are you on-prem or cloud?" - On-prem.

"What does your inputs.conf stanza look like?"

[monitor:<path>]
sourcetype = <sourcetype name>

Thanks