
Thank you
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Short answer: Yes. Longer answer: You can do it with CSS and a token set from the search results.

<panel depends="$alwaysHide$">
  <html>
    <style>
      #single text {
        fill: $colour$ !important;
      }
    </style>
  </html>
</panel>
<panel>
  <single id="single">
    <search>
      <query>| makeresults
| fields - _time
| eval OnTarget=mvindex(split("Yes,No",","),random()%2)
| eval _colour=if(OnTarget="Yes","Green","Red")</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
      <done>
        <set token="colour">$result._colour$</set>
      </done>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </single>
</panel>
| rex field=account_id "\b(0?)(?<field_to_look_up>\d+)\b"
| lookup accounts.csv account_id AS field_to_look_up [...]
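The same normalisation idea can be sketched outside Splunk. This is a minimal Python illustration (my own, not SPL): strip one optional leading zero from the account ID, mirroring the `\b(0?)(?<field_to_look_up>\d+)\b` pattern, then do the lookup on the normalised value. The `lookup` dict stands in for accounts.csv.

```python
import re

def normalise(account_id: str) -> str:
    """Drop one optional leading zero, as the rex capture group does."""
    m = re.match(r"\b0?(\d+)\b", account_id)
    return m.group(1) if m else account_id

# Stand-in for the accounts.csv lookup table.
lookup = {"12345": ("David", "prod"), "123456": ("John", "non-prod")}

for raw in ("12345", "012345"):
    owner, acct_type = lookup[normalise(raw)]
    print(raw, "->", owner, acct_type)   # both rows resolve to David / prod
```

Note that, like the rex above, this strips only a single leading zero; IDs such as "0012345" would need `0*` instead of `0?`.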
At first glance it seems like a use case for federated search. Having said that - I've never used federated search myself and can't tell you what limitations it has.
What you have should work, but you could try this instead:

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| stats count by user app date_month
| chart count by app date_month
There is no single good answer to such a question. It all depends on your data input. A 10G uplink is obviously overkill for a single UF, or just a handful of UFs on fairly idle servers. But on the other hand, if you have a site with a plethora of very active nodes, even that 10G pipe might not be sufficient (though in that case a single intermediate forwarder will most likely not be enough either).
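One way to turn "it depends on your data input" into a number is a back-of-envelope calculation. The sketch below is my own illustration, not official Splunk guidance: it converts an expected daily ingest volume into an average link rate, with a configurable burst factor as headroom.

```python
def required_mbps(gb_per_day: float, burst_factor: float = 3.0) -> float:
    """Average Mbit/s needed for a daily ingest volume, times a burst factor."""
    bits_per_day = gb_per_day * 1024**3 * 8   # GB/day -> bits/day
    avg_mbps = bits_per_day / 86_400 / 1_000_000
    return avg_mbps * burst_factor

# e.g. a branch shipping 50 GB/day averages ~5 Mbit/s;
# with 3x burst headroom you'd budget roughly 15 Mbit/s.
print(round(required_mbps(50), 1))
```

The burst factor is the judgment call: forwarder traffic is bursty, and catch-up after an outage can saturate a link sized only for the average.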
Hi Team,
Can you please help me find a way to change the color of the output value in a single value visualization?
If COUNT_MSG is OK, then display OK in green.
If COUNT_MSG is NOK, then display NOK in red.

Current code:

<panel>
  <title>SEMT FAILURES DASHBOARD</title>
  <single>
    <search>
      <query>(index="events_prod_gmh_gateway_esa") sourcetype="mq_PROD_GMH" Cr=S* (ID_FCT=SEMT_002 OR ID_FCT=SEMT_017 OR ID_FCT=SEMT_018) ID_FAMILLE!=T2S_ALLEGEMENT
| eval ERROR_DESC=case(Cr == "S267", "T2S - Routing Code not related to the System Subscription.", Cr == "S254", "T2S - Transcodification of parties is incorrect.", Cr == "S255", "T2S - Transcodification of accounts are impossible.", Cr == "S288", "T2S - The Instructing party should be a payment bank.", Cr == "S299", "Structure du message incorrecte.", 1=1, "NA")
| stats count as COUNT_MSG
| eval status = if(COUNT_MSG = 0, "OK", "NOK")
| table status</query>
      <earliest>@d</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
      <refresh>1m</refresh>
      <refreshType>delay</refreshType>
    </search>
    <option name="drilldown">all</option>
    <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
    <option name="trellis.enabled">0</option>
    <option name="useColors">1</option>
  </single>
</panel>

Current output: (screenshot not included)
I still don't understand what you mean by those apps. An app in Splunk terminology is just a collection of files. The same set of settings can usually be provided equally well by a single app or by multiple ones (with the possible difference of access to config elements provided by different apps, if you differentiate permissions on a role/app basis).

The performance improvement from summary indexing happens not just because you use the collect command, but because summary indexing assumes exactly that: doing some _summary_ of your data before indexing it. So, for example, you calculate some aggregated sums for every 15 minutes and store those values using the collect command in an index, so that later you don't have to summarize your raw data each time but can simply use the already-calculated sums. That's what summary indexing is. Simply copying events from index to index is not summary indexing.
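The pre-aggregation idea can be shown with a toy example. This is plain Python (not Splunk code, and the 15-minute span is just the example from above): raw events are rolled up once into per-bucket sums, and later questions are answered from the small summary instead of rescanning the raw data.

```python
from collections import defaultdict

raw_events = [  # (epoch_seconds, bytes) pairs standing in for raw log events
    (0, 100), (300, 250), (900, 50), (1000, 75), (1800, 500),
]

def summarize(events, span=900):
    """One summed row per 15-minute bucket - what 'collect' would store."""
    buckets = defaultdict(int)
    for ts, size in events:
        buckets[ts - ts % span] += size
    return dict(buckets)

summary = summarize(raw_events)
# Later searches scan 3 summary rows instead of 5 raw events.
print(summary)                 # {0: 350, 900: 125, 1800: 500}
print(sum(summary.values()))   # 975 - same total, far less data scanned
```

The saving grows with the ratio of raw events to summary rows; copying every event unchanged gives a ratio of 1 and therefore no saving, which is the point made above.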
Hi @bowesmana,

Thanks for the response. I don't know JS, so I am checking with the community whether someone else has had the same issue and might have a generic solution that could help me fix the XML. I tried your solution and it worked. Could you please explain what it really does, so that I get the idea?

Thanks,
Pravin
Hi @hazem,
as I said, the answer is related to the bandwidth you have. There isn't a recommended value: the highest you can get! As I said, between the intermediate UFs and the indexers you must have a large bandwidth to avoid queues. You can check queues with my search.
Ciao.
Giuseppe
Thank you @gcusello for your reply. I need to clarify that we are in the architecture design phase; our customer asked me the questions below and I need to reply with a specific recommended bandwidth:

What is the recommended bandwidth between intermediate forwarders, heavy forwarders, and indexers?
What is the recommended bandwidth between the UF agents and indexers?
Hi @hazem,
it depends on how many logs you have to transmit: e.g. a Domain Controller has to transmit more logs than an ordinary server, and if you have application logs you must consider them too.

Anyway, between the intermediate UFs and the indexers, my suggestion is to avoid limits. You can configure the maximum throughput using the maxKBps parameter on the UFs. My suggestion is to leave the default values, changing only maxKBps for the intermediate UFs, and then check both whether you have network congestion and whether your indexers can index all logs with an acceptable delay.

Another analysis to perform is checking for full queues, using this search:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue", name=="indexqueue", "4 - Indexing Queue", name=="parsingqueue", "1 - Parsing Queue", name=="typingqueue", "3 - Typing Queue", name=="splunktcpin", "0 - TCP In Queue", name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats Median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time

If you have queues, you can modify the maxSize parameter for the queues and the maxKBps.
Ciao.
Giuseppe
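The core of that search is the fill-percentage computation. Here is a minimal Python sketch of the same logic (my own illustration, not Splunk code): current queue size over max size per measurement, take the median, and flag the queue when it crosses 70%.

```python
from statistics import median

def fill_percentage(samples):
    """samples: list of (current_kb, max_kb) measurements for one queue."""
    return median(round(curr / mx * 100, 2) for curr, mx in samples)

# Three one-minute measurements of a 500 KB queue.
samples = [(350, 500), (450, 500), (480, 500)]
pct = fill_percentage(samples)
print(pct, "- investigate" if pct > 70 else "- ok")   # 90.0 - investigate
```

The median (rather than the max) is what keeps a single momentary spike from flagging an otherwise healthy queue.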
Our deployment has indexers located in the main data center and multiple branches. We plan to deploy intermediate forwarders and Universal Forwarder (UF) agents in our remote branches to collect logs from security devices like firewalls and load balancers.

What is the recommended bandwidth between the intermediate forwarders and indexers?
What is the recommended bandwidth between the UF agents and indexers?
Hi,

I have a requirement to create a software stack dashboard in Splunk which shows all the details that are seen in Task Manager (screenshot not included). We have both the Windows and Unix add-ons installed and are getting the logs.

Can someone please help me create a dashboard which shows all these details?
Thanks! Do you perhaps know how I can count a user only once per month per app?

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| table user, app, date_month
| dedup user, app, date_month
| chart count by app date_month
| sort app 0

This gives me a huge total for August but drops the events for the other months.
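The "count each user once per app per month" goal can be illustrated in plain Python (not SPL; the sample events are made up): keep only distinct (user, app, month) triples, the equivalent of the dedup, then count per (app, month).

```python
from collections import Counter

events = [
    ("alice", "chatgpt", "august"),
    ("alice", "chatgpt", "august"),    # duplicate: same user/app/month
    ("bob",   "chatgpt", "august"),
    ("alice", "chatgpt", "september"),
]

distinct = set(events)  # equivalent of | dedup user, app, date_month
per_app_month = Counter((app, month) for _, app, month in distinct)

print(per_app_month[("chatgpt", "august")])     # 2 distinct users
print(per_app_month[("chatgpt", "september")])  # 1 distinct user
```

In SPL the same effect is usually reached with a distinct count rather than dedup, e.g. `stats dc(user) by app date_month`, which avoids dedup's interaction with event ordering.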
Hi,

We maintain a lookup table which contains a list of account_id values and some other info, as shown below.

account_id | account_owner | type
12345      | David         | prod
123456     | John          | non-prod
45678      | Nat           | non-prod

In our query, we use a lookup command to enrich the data using this lookup table. We match by account_id and get the corresponding owner and type as follows.

| lookup accounts.csv account_id OUTPUT account_owner type

In some events (depending on the source), the account_id value contains a leading 0, but in our lookup table the account_id column does not have a leading 0. Basically, some events will have account_id=12345 and some might have account_id=012345; they are both the same account. The lookup command produces results when there is an exact matching account_id in the events, but fails when there is that extra 0 at the beginning. How can I tune the lookup command to search the lookup table both with and without the leading 0 in the account_id field, so that if either matches it produces the corresponding results? I hope I am clear. I am unable to come up with a regex for this.
Note that if you have any days where there are no results, you will not get a datapoint for that day for that source, so it will affect the average. You can probably resolve that if that's an issue.
Instead of using timechart, you can use stats and bin by _time, e.g.

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time span=1d, source
| stats sum(log_count) as sum_log by _time source
| eventstats avg(sum_log) as avg_sum_log by source

and then in your trellis give yourself an independent scale. You seem to need both the tstats and the stats to give yourself a trellis-by-source option.
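What the stats + eventstats pair does can be sketched in plain Python (not SPL; the sample rows are invented): sum the daily counts per source, then attach each source's average back onto every daily row, which is exactly what eventstats adds without collapsing the rows.

```python
from collections import defaultdict
from statistics import mean

daily = [  # (day, source, count) rows, like the tstats output
    ("d1", "syslog", 10), ("d2", "syslog", 30),
    ("d1", "app",    5),  ("d2", "app",    15),
]

# Group the daily sums per source (the stats step).
by_source = defaultdict(list)
for _, source, count in daily:
    by_source[source].append(count)

# Compute each source's average and join it back onto every row
# (the eventstats step: rows keep their identity, gain a new column).
averages = {s: mean(v) for s, v in by_source.items()}
rows = [(day, src, cnt, averages[src]) for day, src, cnt in daily]
print(rows[0])   # ('d1', 'syslog', 10, 20)
```

Keeping the per-day rows alongside the average is what lets each trellis panel plot both the daily value and its own baseline.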
Hi @koshyk,
OK, as I said, you have to create a cluster in each environment. Then each Search Head Cluster must be configured as a search head of each Indexer Cluster.

For more details see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Configuremulti-clustersearch

Let me know if I can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated