All Posts

@bowesmana it worked, now I have to figure out how to use my drilldown to isolate the columns and index heading with my query. Either way, thank you.
The solution given does work and it is exactly what I am looking for. The value{0} does correspond to dsnames{0}, and value{1} to dsnames{1}. I am unfortunately not able to change the logs.

The only problem I have still is that the max for read and write is displaying the same number, and I am almost certain they should be different numbers.

This is the current search:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| eval data = mvappend(json_object("dsname", mvindex('dsnames{}', 0), "value", mvindex('values{}', 0)), json_object("dsname", mvindex('dsnames{}', 1), "value", mvindex('values{}', 1)))
| mvexpand data
| spath input=data
| stats min(value) as min max(value) as max avg(value) as avg by dsname
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

This is the raw text of the JSON event:

{"values":[0,35225.165651947],"dstypes":["derive","derive"],"dsnames":["read","write"],"time":1700320094.109,"interval":10.000,"host":"usorla7sw101x.ad101.siemens-energy.net","plugin":"disk","plugin_instance":"dm-0","type":"disk_octets","type_instance":"","meta":{"network:received":true,"network:ip_address":"129.73.170.204"}}

This is the current output:

dsname  min   max           avg
read    0.00  192626230.85  53306.64
write   0.00  192626230.85  65185.22
That works perfectly. Thank you, PickleRick.
+1 on @richgalloway 's doubt.

You are supposed to:
- download the archive and unpack it to $SPLUNK_HOME/etc/apps
- restart Splunk

That's it. No ingesting anything, no defining inputs, no nothing. The files will _not_ be moved anywhere - the app contains pre-indexed buckets along with the indexes.conf file pointing to this particular directory, so that Splunk knows where to find the data. So after the restart Splunk should notice that it has new index(es?) with data files placed in your app's directory (that's kinda unusual and you'd normally not do that for normally ingested index data, but this is a dataset prepared to be easily distributed).

And that's all there is to it. You should _not_ be ingesting it in any way - which you somehow did, since you're showing us the contents of the files pulled into some index.
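For reference, a pre-indexed dataset app of this kind typically ships an indexes.conf whose bucket paths point back into the app directory rather than under $SPLUNK_DB. The stanza below is only a rough sketch of that pattern - the app folder name, index name and subdirectory layout are illustrative assumptions, not the actual contents of the BOTS v1 app:

# $SPLUNK_HOME/etc/apps/my_dataset_app/default/indexes.conf (hypothetical app name)
[my_dataset_index]
# these paths normally live under $SPLUNK_DB; a self-contained dataset app
# can instead point them at the pre-built buckets bundled inside the app
homePath   = $SPLUNK_HOME/etc/apps/my_dataset_app/indexes/my_dataset_index/db
coldPath   = $SPLUNK_HOME/etc/apps/my_dataset_app/indexes/my_dataset_index/colddb
thawedPath = $SPLUNK_HOME/etc/apps/my_dataset_app/indexes/my_dataset_index/thaweddb

If Splunk comes back up after the restart and a search against the index named in the bundled indexes.conf returns events, the app was unpacked correctly.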
It seems you copy-pasted your events from the Splunk GUI and they got a bit mangled, so next time try to paste raw events, not the text from the search results, and put them in a code block (the </> button at the top of the editor on this page) or in the preformatted text style.

Anyway. Since you can't reliably say beforehand what your Country field will look like (it can be just one word, it can contain spaces, maybe dashes), you need to anchor it by providing text which will always be _after_ that string, so that the regex knows when to stop. Like:

and\scountry\s(?<Country>.*)\sat:

This way everything that comes after "and country " and before " at:" will get extracted as your Country field. You could fiddle with the greediness of the match if you can have another "at:" later on in the event.
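To check the anchored capture interactively before saving it as a field extraction, you can run it ad hoc with rex. This is just a sketch against the sample events above (the source/sourcetype filters are taken from them); the non-greedy variant mentioned in the comment is an assumption about how you might tighten the match, not part of the original suggestion:

source=networks sourcetype=network_logs
| rex field=_raw "and\scountry\s(?<Country>.*)\sat:"
| table _raw Country
``` if more than one " at:" can appear later in the event, switch to the non-greedy (?<Country>.+?) so the match stops at the first one ```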
A UF does not have a web interface. Every other component, based on the full Splunk Enterprise package, can have its own webui. But whether this webui will be on depends on your particular needs and architecture.

Indexers are typically run headless (without a web interface). Search heads are typically run with a web interface (there are very rare cases when a search head can be run headless, but these are really border cases). A Deployment Server can be run with or without the webui, depending on how you're going to manage it. An HF can be run with or without the webui (the core functionality - receiving and forwarding data - can be run without the webui perfectly well, but some apps need the webui to be initially configured).

So it's a bit complicated.
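For completeness, turning Splunk Web off on an instance you want to run headless is a single setting. A minimal sketch of the standard web.conf toggle - whether you put it in system/local or distribute it in an app is up to your deployment conventions:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# 0 = do not start Splunk Web on this instance (typical for indexers and many HFs)
startwebserver = 0

The same switch is also available from the CLI via "splunk disable webserver" and "splunk enable webserver".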
Hi all, I'm having difficulty crafting regex that will extract a field that can have either 1 or multiple words. Using the "add field" in Splunk Enterprise doesn't seem to be able to get the job done either. The field I would like to extract is the "Country", which can be 1 word or multiple words. Any help would be appreciated. Below is my regex and a sample of the logs from which I am trying to extract fields. I don't consider myself to be a regex guru, so don't laugh at my field extraction regex. It works on everything except the country.

User\snamed\s(\w+\s\w+)\sfrom\s(\w+)\sdepartment\saccessed\sthe\sresource\s(\w+\.\w{3})(\/\w+\.*\/*\w+\.*\w{0,4})\sfrom\sthe\ssource\sIP\s(\d+\.\d+\.\d+\.\d+)\sand\scountry\s\W(\w+\s*)

11/17/23 2:25:22.000 PM [Network-log]: User named Linda White from IT department accessed the resource Cybertees.THM/signup.html from the source IP 10.0.0.2 and country France at: Fri Nov 17 14:25:22 2023
host = ***** source = networks sourcetype = network_logs

[Network-log]: User named Robert Wilson from HR department accessed the resource Cybertees.THM/signup.html from the source IP 10.0.0.1 and country United States at: Fri Nov 17 14:25:11 2023
host = ***** source = networks sourcetype = network_logs

11/17/23 2:25:21.000 PM [Network-log]: User named Christopher Turner from HR department accessed the resource Cybertees.THM/products/product2.html from the source IP 192.168.0.100 and country Germany at: Fri Nov 17 14:25:17 2023
host = ***** source = networks sourcetype = network_logs
Something's not right in that screenshot.  The contents of indexes.conf should not be indexed.  I suspect some instructions are being misinterpreted. Please tell us more details about how you are trying to load the data.  Provide the exact steps followed or a link to them.
For a 2-D plot, you'll have to somehow reduce the number of arguments to 3.  Perhaps you can concatenate two of the values (| eval foo=bar . baz) and graph the remaining three.
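As a rough illustration of that idea (the field names here - PartnerID, site, metric - are assumptions for the sake of the example), folding two of the dimensions into a single series name leaves the chart with just time, series and value:

index=your_index sourcetype=your_sourcetype
| eval series = PartnerID . " / " . site ``` concatenate two dimensions into one series label ```
| timechart avg(metric) by series

The PartnerID is then still recoverable from the series label in a drilldown, e.g. by splitting on " / ".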
Hi @blkscorpio ,

I think that you should read something about Splunk architecture. Anyway, the Splunk UF and HF are different packages: the UF is a thin agent to install to ingest logs, while the HF is a full Splunk instance (a different package) where only the ingestion and forwarding features are used. The HF also has all of the UF's features and many other things. The UF doesn't have a web interface; only the HF has a web interface.

Start from the Fundamentals I training to get a first introduction to Splunk.

Ciao.

Giuseppe
Sorry, I am just starting to learn Splunk and I am a bit confused. So my question is: do the Universal Forwarder and the Heavy Forwarder have separate "Instances" on Splunk Web? I have 
Hello, two dimensions are enough. I only need the PartnerID as a value at the measuring point to branch from the display to another detail dashboard. Does anyone have any ideas?

Regards
Michael
OK. Splunk terminology can be a bit confusing for newcomers, so let's straighten that out a bit.

There are two separate installation packages that you can download:
- Universal Forwarder
- "full" Splunk Enterprise installer

The Universal Forwarder is a relatively small instance with limited functionality, meant only to be used for getting data in via some subset of input types and forwarding that data downstream. It cannot do local indexing or searching, and it doesn't have some other functionalities (like syslog forwarding).

Everything else you install from the Splunk Enterprise installer, and you end up with a Splunk Enterprise server which can have one or more roles:
- indexer
- search head
- cluster manager
- license manager
- search head cluster deployer
- deployment server
- heavy forwarder

(You can also install an "all-in-one" instance performing both indexer and search head duties.)

A Heavy Forwarder is basically a Splunk Enterprise instance that does not perform local indexing - it doesn't store the data it receives, either from local inputs or from other forwarders, but processes the events and forwards them to outputs (either another layer of forwarders or indexers).

So "forwarder" as a general term is a component which gets the data somehow and forwards it - it can be a UF or an HF depending on which software package is used to install the server.
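In configuration terms, the "forwards instead of indexing locally" behaviour of a heavy forwarder mostly comes down to an outputs.conf pointing at the indexers, with local indexing switched off. A hedged sketch - the output group name and indexer addresses are placeholders:

# $SPLUNK_HOME/etc/system/local/outputs.conf (on the heavy forwarder)
[indexAndForward]
# false = forward only; set to true only if you also want a local copy
index = false

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997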
First and foremost - often the key to speeding up a search is just writing a good search. Typically you don't just search for all events from a given index - you either look for something specific or transform the data to get some meaningful summary.

Having said that - there are so many points where things can go slow (network links, performance and resources of single hosts, data distribution, the load on your environment) that it's impossible to give a "general" answer. So architecture is one thing (and you really should get your local friendly Splunk Partner involved to design an architecture fitting your specific needs - including resilience, HA, capacity and specific use cases), but troubleshooting an existing environment is another.

You can verify what your search is waiting for using the "Inspect Job" button.
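To make the "write a good search" point concrete, here is a hedged sketch - the sourcetype, the status field and the summary you actually need are assumptions. Rather than pulling all raw events back to the search head, filter early and let the indexers aggregate; for summaries over indexed fields, tstats is usually faster still:

``` broad: returns every raw event from the index ```
index=A

``` narrower: filter early and aggregate on the indexers ```
index=A sourcetype=my_sourcetype status=ERROR
| stats count by host

``` indexed-fields-only summary, no raw event retrieval ```
| tstats count where index=A by host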
Hi all,

I created an environment with the following instances:
- cluster master
- three search heads
- four indexers
- heavy forwarder
- license server
- deployment server
- deployer

We have more than 50 clients, so I deployed the deployment server on a dedicated server. We have some indexes, but one of them (say the index named A) has about 35K events per minute. The heavy forwarder load balances the events between the four indexers. The replication factor is 4 and the search factor is 3.

A simple search like 'index=A' can return about 17M events in about 5 minutes. I want to speed up searches on index A. I can change the whole deployment and environment if anyone has an idea about speeding up the search. I would be grateful if anyone could help me with parameters like replication factor, search factor, number of indexers and so on, to speed up the search.

Thank you.
Hey! So I'm using an EC2 Splunk AMI and have all the correct apps loaded, but cannot for the life of me get the BOTS v1 data into my environment. I've put it into $SPLUNK_HOME/etc/apps (as mentioned on GitHub) and it did not work - it simply does not pick up that this is a data set and instead it sits comfortably in my apps. Loading it in other ways means it doesn't come through correctly. Is this a timestamp issue?

Any help would be so appreciated.
1. If I remember correctly, you said that you were extracting host from the whole "packed" event - before casting it to the destination sourcetype. It was just to demonstrate that it works and that the cloned event inherits the indexed fields which had already been extracted before the event was cloned. That's all. If you're going to extract the host from the "embedded" event after casting it to the destination sourcetype, no problem, you can remove that transform from the source sourcetype. The transforms extracting source and host were just inserted to demonstrate the inheritance and "overwriting" of fields in case of event cloning. They are not essential for the cloning as such.

2. The conditional_host_overwrite transform was to show you that even though the host field gets overwritten in the source sourcetype, the transform will get executed in the destination sourcetype, so your cloning does indeed do what you wanted - fire transforms from the destination sourcetype to which the events get cloned.

The main takeaways from this exercise are that:
1. Indexed fields get cloned with the CLONE_SOURCETYPE functionality.
2. _ALL_ events to which the given props get applied (so all matching a given sourcetype, source or host - depending on how you assign your transforms to events) get cloned, so you have to selectively filter them in the source sourcetype and destination sourcetype.
3. Props/transforms from the destination sourcetype get applied after the event gets cloned.
4. Order of transforms is important.
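For anyone landing on this thread later, the bare-bones shape of the mechanism under discussion looks roughly like the sketch below. The stanza and transform names are made up for illustration, and the filtering and host-overwrite transforms from the thread are deliberately left out:

# transforms.conf
[clone_to_dest]
REGEX = .
CLONE_SOURCETYPE = dest_sourcetype

# props.conf
[source_sourcetype]
# the clone is created here; the destination sourcetype's props/transforms
# then run on the cloned copy
TRANSFORMS-clone = clone_to_dest

[dest_sourcetype]
# applied to the cloned events after cloning (transform name is hypothetical)
TRANSFORMS-postclone = your_dest_transforms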
Hi @PickleRick ,

thank you very much for your answer, I will try it on Monday and I'm confident that it will run.

Just two clarifications, to better understand your solution:
1. you hint at extracting host and source before cloning the sourcetype, but how are these two fields passed to the cloned sourcetype?
2. what's the purpose of the "TRANSFORMS-conditional_host_overwrite" command in the cloned sourcetype? Must I also add a similar command for the source field?

Ciao and thank you very much.

Giuseppe
Please use raw text to post sample JSON events, not a screenshot and not Splunk's contracted pretty format. Do you mean the value{0} corresponds to dsnames{0}, and value{1} to dsnames{1}?

This is about as wasteful as JSON data design goes. If you have influence on the developers who wrote these logs, implore them to change the structure to an array of hashes instead of a hash of arrays. Like this:

{"whatever": [
  {"dsname":"read", "dstype":"typefoo", "value": 123},
  {"dsname":"write", "dstype":"typebar", "value": 456}
] }

Before that happens, you can contain the damage from your developers' crimes with some reconstruction. Traditionally, this is done with string concatenation; and usually, you need mvmap to handle an indeterminate or large number of array elements. In this case, there are only two semantic values, so I'll not be bothered with mvmap. I will also use structured JSON instead of string concatenation. (The JSON function was introduced in Splunk 8.0.)

| eval data = mvappend(json_object("dsname", mvindex('dsnames{}', 0), "value", mvindex('values{}', 0)), json_object("dsname", mvindex('dsnames{}', 1), "value", mvindex('values{}', 1)))
| mvexpand data
| spath input=data
| stats min(value) as min max(value) as max avg(value) as avg by dsname
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

Here is some mock data that I used to test the above:

_raw                                                  dsnames{}   values{}
{"dsnames": ["read", "write"], "values": [123, 234]}  read write  123 234
{"dsnames": ["read", "write"], "values": [456, 567]}  read write  456 567

This is an emulation to get the above:

| makeresults
| eval data=mvappend("{\"dsnames\": [\"read\", \"write\"], \"values\": [123, 234]}", "{\"dsnames\": [\"read\", \"write\"], \"values\": [456, 567]}")
| mvexpand data
| rename data as _raw
| spath
``` data emulation above ```

You can play with this and compare with real data. This mock data gives:

dsname  min     max     avg
read    123.00  456.00  289.50
write   234.00  567.00  400.50
Hi @Zodi_6 ,

see the transpose command at https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Transpose and, please, try:

index=_internal source="*license_usage.log"
| eval bytes=b
| eval GB = round(bytes/1024/1024/1024,3)
| timechart span=1d sum(GB) by h
| transpose 0 column_name=h header_field=_time

Ciao.

Giuseppe