All Posts


Hello. Thank you for your reply. You are right, I have given little information.

We have Windows devices. These devices have a limited network map, and they do not save log files covering all connections to the WiFi network. The only way to get this information is to go to CMD and run the command: netsh wlan show wlanreport. The report is then saved as: C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.html. But this report is only saved after manual entry of the command on the device. We need this report to be saved continuously, for example once an hour, so that the file can later be loaded into Splunk to analyze the operation of, and connections to, the WiFi network.

Yes, of course, we would like more information from these devices, such as equipment signal strength, connection drops, failed pings, and the MAC addresses of the access points the device connects to. But for now this is not the priority, as I would first like to automate saving the log file to a specific folder.

I would be grateful for any ideas or advice on this matter. If you have any clarifying questions, do not hesitate to ask me. Thanks in advance for your answer.
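One possible approach (a sketch only, not a tested production input): run a small script on an hourly schedule, e.g. via Windows Task Scheduler or a Splunk scripted input, that regenerates the report and copies it with a timestamp into a folder a Universal Forwarder monitors. The destination folder C:\WlanReports and the file-naming scheme below are assumptions, not Splunk or Windows defaults.

```python
# Hypothetical hourly job: regenerate the wlan report and copy it,
# timestamped, into a folder watched by a Universal Forwarder.
import shutil
import subprocess
import sys
import time
from pathlib import Path

# Default report location written by "netsh wlan show wlanreport"
REPORT = Path(r"C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.html")
DEST_DIR = Path(r"C:\WlanReports")  # folder the UF monitors (assumption)

def dest_name(epoch: int) -> str:
    """Timestamped copy name so hourly runs don't overwrite each other."""
    return time.strftime("wlan-report-%Y%m%d-%H%M%S.html", time.gmtime(epoch))

def main() -> int:
    if sys.platform != "win32":
        print("netsh is Windows-only; nothing to do on this platform")
        return 0
    # Regenerate the report, then copy it aside before the next run replaces it
    subprocess.run(["netsh", "wlan", "show", "wlanreport"], check=True)
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(REPORT, DEST_DIR / dest_name(int(time.time())))
    return 0

if __name__ == "__main__":
    main()
```

With the copies landing in one folder, a simple [monitor://C:\WlanReports] input on the forwarder would pick them up.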
Hi @alec_stan , Splunk doesn't have any kind of clustering at the forwarder level; you have to configure your DS to deploy the same configurations to all the HFs.
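As a sketch of what that looks like on the deployment server, a single server class can push one shared configuration app to every HF (the hostnames and the app name hf_outputs below are hypothetical):

```ini
# serverclass.conf on the deployment server
[serverClass:intermediate_forwarders]
whitelist.0 = if*.dc1.example.com
whitelist.1 = if*.dc2.example.com

# Deploy the shared configuration app to every matching HF
[serverClass:intermediate_forwarders:app:hf_outputs]
restartSplunkd = true
```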
Hi @gcusello  Thank you for the quick response. That means we do not need any form of clustering. In our current setup, we have two intermediate forwarders; they do not store any copy of the data, and there is no clustering. From what you are saying, we should deploy two new forwarders on the other site and configure all universal forwarders to point to the four intermediate forwarders (two in DC1, two in DC2). Thanks again.
Hi @alec_stan , it's surely useful to have at least one or two HFs in the secondary site, so you have HA on all the layers of your infrastructure; the number depends on the traffic they have to manage.

About the DS, you can continue to have only one: a redundant infrastructure isn't mandatory for this role, because in case of a fault of the primary site, the only limitation is that you cannot update your forwarders for a limited time. The reason for having a second DS is the number of forwarders to manage, or a segregated network; it isn't related to HA.

About the configuration of the forwarder layer, you have to configure all of the forwarders to send their logs to all the HFs in auto load balancing mode, and then Splunk will manage the data distribution and failover.

Ciao. Giuseppe
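On the forwarder side, sending to all HFs in auto load balancing mode amounts to an outputs.conf along these lines (hostnames and port are placeholders for your environment):

```ini
# outputs.conf on every universal forwarder
[tcpout]
defaultGroup = intermediate_forwarders

# One group containing all four HFs (two per DC); with multiple servers
# in a group, the forwarder load-balances across the list automatically
# and fails over to the remaining HFs if one becomes unreachable.
[tcpout:intermediate_forwarders]
server = if1.dc1.example.com:9997, if2.dc1.example.com:9997, if1.dc2.example.com:9997, if2.dc2.example.com:9997
```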
Good day Splunkers, We have two sites/DCs, where one is production and the other a standby DR. In our current architecture, we have intermediate forwarders that forward the logs to Splunk Cloud. All universal forwarders send metrics/logs to these intermediate forwarders. We also have a single deployment server. The architecture is as follows: UF -> IF -> SH (Splunk Cloud). The intermediate forwarders are Heavy Forwarders; they do some indexing and some data transformation such as anonymizing data. The search head is in the cloud. We have been asked to move from the current production-DR architectural setup to a multi-site (active-active) setup. The requirement is for both DCs to be active and servicing customers at the same time. What is your recommendation in terms of setting up the forwarding layer? Is it okay to provision two more intermediate forwarders in the other DC and have all universal forwarders send to all intermediate forwarders across the two DCs? Is there a best practice that you can point me towards? Furthermore, do we need more deployment servers? Extra info: the network team is about to complete a network migration to Cisco ACI.
Along with what @SanjayReddy shared, https://www.splunk.com/en_us/training/certification.html would also help for looking over the different certifications (the individual links list their respective prerequisites).
How can we create a custom heatmap to project the overall health of all the applications deployed, split by platform and by region? Which metrics can we use to project overall application health in Splunk Observability Cloud? In RUM, we have only the country property; using that, we are able to split applications by country and environment. We need to split by platform and region.
How can we create a chart of alert/detector status to showcase the overall health of an application?

1. How many alerts are configured for each application?
2. What is the status of alerts by severity?

Which metrics are available to build the above use case into an overall health dashboard in Splunk Observability Cloud?
I recommend you first check all the available metrics from this receiver and be sure to enable the ones you want. https://docs.splunk.com/observability/en/gdi/opentelemetry/components/mssql-server-receiver.html If you don't see the metrics you need available, you may need to write your own custom SQL to retrieve the metrics you need using another receiver called SQL Query Receiver. https://docs.splunk.com/observability/en/gdi/opentelemetry/components/sqlquery-receiver.html
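If you do go the custom-SQL route, a SQL Query Receiver configuration looks roughly like the sketch below; the connection string, query, and metric name are placeholders, so check the receiver docs above for the exact schema:

```yaml
receivers:
  sqlquery:
    driver: sqlserver
    datasource: "sqlserver://monitor_user:password@mssql-host:1433?database=master"
    queries:
      # Hypothetical example: expose a performance counter as a metric
      - sql: "SELECT counter_name, cntr_value FROM sys.dm_os_performance_counters"
        metrics:
          - metric_name: mssql.custom.counter_value
            value_column: cntr_value
            attribute_columns: [counter_name]
```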
Please review this documentation. It will guide you through creating a global data link. https://docs.splunk.com/observability/en/metrics-and-metadata/link-metadata-to-content.html A couple of notes to point out:

* Data links are created on metadata: things like host name, container id, etc.
* They appear on certain types of charts, but not all. Line charts are a good example of where to find them.
* Click on any point in time on the chart and the data table will appear. The columns that contain metadata will show "..." next to their values; click it to follow existing data links, or start creating a new one by clicking "configure data links".
* When creating a new data link, you'll probably want to use the option "Show on any value of <your metadata field>" so that the value you have selected in your chart filter can carry through.
Thanks @tscroggins for your upvote and karma points, much appreciated! The last few months I was busy and could not spend time on this one. May I know what your suggestions would be about these points, please:

1) I have been thinking of creating an app as you suggested below. Would you recommend an app, or a custom command, or simply uploading a lookup of the Unicode blocks for all the important languages (tamil_unicode_block.csv) to Splunk?

| makeresults
| eval _raw="இடும்பைக்கு"
| rex max_match=0 "(?<char>.)"
| lookup tamil_unicode_block.csv char output general_category
| eval length=mvcount(mvfilter(NOT match(general_category, "^M")))

2) I assume that if I encapsulate the Python script listed below in that app, it should be the work-around for this issue in a language-agnostic way (the app should work for Tamil, Hindi, Telugu, etc.).

3) Or any other suggestions, please. Thanks.

The app idea (your script from the previous reply):

$SPLUNK_HOME/etc/apps/TA-ucd/bin/ucd_category_lookup.py (this file should be readable and executable by the Splunk user, i.e. have at least mode 0500)

#!/usr/bin/env python
import csv
import unicodedata
import sys

def main():
    if len(sys.argv) != 3:
        print("Usage: python category_lookup.py [char] [category]")
        sys.exit(1)
    charfield = sys.argv[1]
    categoryfield = sys.argv[2]
    infile = sys.stdin
    outfile = sys.stdout
    r = csv.DictReader(infile)
    header = r.fieldnames
    w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
    w.writeheader()
    for result in r:
        if result[charfield]:
            result[categoryfield] = unicodedata.category(result[charfield])
        w.writerow(result)

main()

$SPLUNK_HOME/etc/apps/TA-ucd/default/transforms.conf

[ucd_category_lookup]
external_cmd = ucd_category_lookup.py char category
fields_list = char, category
python.version = python3

$SPLUNK_HOME/etc/apps/TA-ucd/metadata/default.meta

[]
access = read : [ * ], write : [ admin, power ]
export = system

With the app in place, we count 31 non-whitespace characters using the lookup:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| rex max_match=0 "(?<char>.)"
| lookup ucd_category_lookup char output category
| eval length=mvcount(mvfilter(NOT match(category, "^M")))

Since this doesn't depend on a language-specific lookup, it should work with text from the Kural or any other source with characters or glyphs represented by Unicode code points. We can add any logic we'd like to an external lookup script, including counting characters of specific categories directly:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| lookup ucd_count_chars_lookup _raw output count

If you'd like to try this approach, I can help with the script, but you may enjoy exploring it yourself first.
This was a fun thread! I upvoted https://ideas.splunk.com/ideas/EID-I-2176.
Hi @zerocoolspain, I would use separate but similar radio inputs. Each radio input has its own set of tokens; however, updating a radio input also updates the global trobots token. The currently selected trobots1 and trobots2 values are preserved across changes to the tintervalo token.

<form version="1.1" theme="light">
  <label>intervalo</label>
  <init>
    <unset token="trobots"></unset>
  </init>
  <fieldset>
    <input type="dropdown" token="tintervalo" searchWhenChanged="true">
      <label>Intervalo</label>
      <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimasemana">Última semana completa</choice>
      <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimomes">Último mes completo</choice>
      <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimotrimestre">Último trimestre completo</choice>
      <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimoaño">Último año completo</choice>
      <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusomescurso">Mes en curso</choice>
      <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoañoencurso">Año en curso</choice>
      <choice value="7">Otros</choice>
      <change>
        <condition match="'tintervalo'==7">
          <set token="show_trobots1">true</set>
          <unset token="show_trobots2"></unset>
          <set token="trobots">$trobots1$</set>
        </condition>
        <condition match="'tintervalo'!=7">
          <unset token="show_trobots1"></unset>
          <set token="show_trobots2">true</set>
          <set token="trobots">$trobots2$</set>
        </condition>
      </change>
    </input>
    <input type="radio" token="trobots1" depends="$show_trobots1$" id="inputRadioRI1" searchWhenChanged="true">
      <label>Robots</label>
      <choice value="| eval delete=delete">Yes</choice>
      <choice value="`filter_robots` `filter_robots_ip`">No</choice>
      <initialValue>`filter_robots` `filter_robots_ip`</initialValue>
      <change>
        <set token="trobots">$trobots1$</set>
      </change>
    </input>
    <input type="radio" token="trobots2" depends="$show_trobots2$" id="inputRadioRI2" searchWhenChanged="true">
      <label>Robots</label>
      <choice value="conBots">Yes</choice>
      <choice value="sinBots">No</choice>
      <initialValue>sinBots</initialValue>
      <change>
        <set token="trobots">$trobots2$</set>
      </change>
    </input>
  </fieldset>
  <row>
    <html>
      <table>
        <tr><td><b>tintervalo:</b></td><td>$tintervalo$</td></tr>
        <tr><td><b>trobots1:</b></td><td>$trobots1$</td></tr>
        <tr><td><b>trobots2:</b></td><td>$trobots2$</td></tr>
        <tr><td><b>trobots:</b></td><td>$trobots$</td></tr>
      </table>
    </html>
  </row>
</form>
Hi @tscroggins and all, Could you please check this:

the file http_error_code.csv

StatusCode,Meaning
100,Continue
101,Switching protocols
403,Forbidden
404,Not Found

the file http_error_codes_400.csv

StatusCode,Meaning
400,Bad Request
401,Unauthorized
402,Payment Required
403,Forbidden
404,Not Found
Hi there, hope you are doing well, thanks for reading.

1) A fresh install of Splunk Enterprise 9.3.2 shows this security warning: Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 12/2/2024, 5:40:52 AM

2) I noticed this warning around 2 or 3 months ago, but as it's a simple, low-priority, functionality-related one, I ignored it.

3) Last week, as we Splunkers were discussing this in our user group meeting, one of my friends pointed out: OK, this is a low-priority issue for you, but from an organization's infosec perspective, this could be a medium or big issue.

4) He suggested that the default config files should be configured to keep things secure (similar to a "zero-trust" security policy); giving a warning message isn't enough. I had to agree with him.

5) Screenshot attached for your reference:
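For anyone who wants to clear the warning, the setting goes in the [email] stanza of alert_actions.conf (the domain names below are placeholders for your own):

```ini
# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
# Only these recipient domains may receive alert emails with search results
allowedDomainList = example.com, corp.example.com
```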
Yes @PickleRick, the docs require a bit more detailed info. I see the docs are not updated yet (screenshot attached), even after my idea request https://ideas.splunk.com/ideas/EID-I-2176 and my bug report to Splunk (I spent a few hours on multiple conference calls with Splunk Support, but with no fruitful results). (New readers, could you please spend a minute and upvote that idea 2176, so at least I can tell my friends that I found a bug in Splunk and suggested an idea worth 100 upvotes.) Okay, sure, agreed that it's not a big show-stopper for Splunk.

I have submitted the docs feedback just now.

Next steps: for around 3 or 4 months I worked on creating an app (following the footsteps of @tscroggins's superb suggestions), but I got stuck in the app packaging area. Working on this "small task" again now; let me update you all on the progress soon, thanks.
@karthi2809 I tend to use a text box where I can insert a where clause, like this

<row id="button_row">
  <panel>
    <input id="events_where" type="text" token="where_clause" searchWhenChanged="true">
      <label>Event filter where clause</label>
      <default></default>
    </input>
    <event>
      <search>
        <query>
          index=_internal host=bla
          | where $where_clause$
        </query>
        <earliest>$selection.earliest$</earliest>
        <latest>$selection.latest$</latest>
      </search>
    </event>
  </panel>
</row>

It gives you the flexibility to construct whatever you want, so as long as you know how to write valid SPL queries, you can use whatever eval statements you like. You can do it with a search clause, but I find it more flexible to use eval-based filters. You can also make your text box nice and wide by using the id="xxx" on the <input> and then adding this CSS:

<row depends="$CSS$">
  <panel>
    <html>
      <style>
        #events_where .splunk-textinput {
          width: 400px !important;
        }
      </style>
    </html>
  </panel>
</row>
As an additional exercise, we can compare diff with combinations of inputlookup. The following searches should return the same results:

A

| set diff [| inputlookup test.csv ] [| inputlookup test2.csv ]

B

| inputlookup test.csv where NOT [| inputlookup test2.csv ]
| inputlookup append=t test2.csv where NOT [| inputlookup test.csv ]
Hi @munang, The set command and the join command perform overlapping but different functions.

set diff returns the symmetric difference of the subsearches, i.e. all events in either subsearch A or subsearch B but not both:

A
url                            A_field
https://www.splunk.com/        A_value
https://www.appdynamics.com/   A_value

B
url                            A_field
https://www.appdynamics.com/   A_value
https://www.cisco.com/         A_value

diff
url                            A_field
https://www.splunk.com/        A_value
https://www.cisco.com/         A_value

Both join type=left and join type=outer perform a left outer join by joining all fields in all events in the base search with all fields from the first (default: max=1) matching event in the subsearch:

A
url                            A_field
https://www.splunk.com/        A_value
https://www.appdynamics.com/   A_value

B
url                            B_field
https://www.appdynamics.com/   B_value1
https://www.appdynamics.com/   B_value2

join
url                            A_field   B_field
https://www.splunk.com/        A_value   (null)
https://www.appdynamics.com/   A_value   B_value1

As written, your join search is equivalent to join type=inner: the where command removes all events from the base search that were not joined to an event in the subsearch. To return the difference using the join command, the command would need to support a full outer join, and it does not.
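Outside Splunk, the two behaviors can be sketched with plain Python sets and dicts on the same example urls (a rough analogy only; SPL events carry more structure than this):

```python
# Symmetric difference: what "| set diff" keeps.
a = {"https://www.splunk.com/", "https://www.appdynamics.com/"}
b = {"https://www.appdynamics.com/", "https://www.cisco.com/"}
sym_diff = a ^ b  # in A or B, but not both

# Left outer join: every base-search event survives; the first matching
# subsearch event (max=1) contributes its fields, else null (None here).
base = {"https://www.splunk.com/": "A_value",
        "https://www.appdynamics.com/": "A_value"}
sub = {"https://www.appdynamics.com/": "B_value1"}  # first match only
joined = {url: (a_field, sub.get(url)) for url, a_field in base.items()}
```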
Try something like this (in SimpleXML of course!) <row id="banner"> <panel> <html> <style> div[id="banner"].dashboard-row { top: 0; position: sticky; z-index: 9999; } div[id="banner"] div { background-color: yellow; } </style> <h1>Your banner panel</h1> </html> </panel> </row>