All Posts



That's a good direction! Unfortunately it's still not working 100%. I used your code in my props.conf: [APIGW] SEDCMD-trim-file = s/(\\"file\\":\s*\\")([^\\"]{5000,}?)/\1long_file/g and here are the results: it looks like it only replaces the first 5,000 characters instead of the entire field, but this is a big step in the right direction, thank you for your help! I will try taking it from here, but it would be much appreciated if you have the solution in mind and can share it.
EDIT: From a few tests I've made, the field change stops exactly after 5,000 characters instead of running until the first comma / end of field.
EDIT2: The regex that was needed was: SEDCMD-trim-file = s/(\\"file\\":\s*\\")([^\\"]{5000,})(\\")/\1long_file/g - thank you for all the help!
EDIT3: Apparently this solution alone is not enough; I also had to increase the TRUNCATE value. Because the SEDCMD replacement runs at the end, Splunk first receives only the default 10,000 characters and only then replaces, so the final result is still truncated events. I needed to increase the TRUNCATE value so that Splunk receives the entire event and does the replacement afterwards.
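For reference, a minimal props.conf sketch that combines the working SEDCMD from EDIT2 with a raised TRUNCATE as described in EDIT3. The 20000 value is only an assumed placeholder; size it to your longest events:

[APIGW]
# Assumed placeholder: raise the limit so the whole event arrives before SEDCMD runs (default is 10000)
TRUNCATE = 20000
# Regex from EDIT2: replace any "file" value of 5000+ characters with the literal string long_file
SEDCMD-trim-file = s/(\\"file\\":\s*\\")([^\\"]{5000,})(\\")/\1long_file/g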
Hi there, before solving this I thought to double-check it through this idea: 1) write a small Python script which will fetch all links from the Splexicon page, and 2) verify the HTTP return code (404, 401, etc.), so we can make sure that no more broken links are there. As I was working on this, I noticed the Splunk version hardcoded in the docs links. For example, take this one - https://docs.splunk.com/Splexicon:Eventdata - this page has two more links under "For more information" in Getting Data In: "Overview of event processing" and "Assign the correct source types to your data". The clean links are: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/Overviewofeventprocessing and https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/Setsourcetype. When I was working on this last time, I remember that the links would have "latest"; now it is hard-coded as "9.3.2". So, if Splunk releases 9.3.3, may I know whether some Splunk Docs admin will manually edit/update these links? Please suggest, thanks.
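A minimal sketch of the link checker described in points 1 and 2, assuming the requests and beautifulsoup4 packages are installed and using https://docs.splunk.com/Splexicon as the starting page (both are assumptions, not something from the original post):

#!/usr/bin/env python
# Sketch: fetch a Splexicon page, extract its links, and report each link's HTTP status code.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

START_URL = "https://docs.splunk.com/Splexicon"  # assumed entry point

resp = requests.get(START_URL, timeout=30)
soup = BeautifulSoup(resp.text, "html.parser")

for a in soup.find_all("a", href=True):
    url = urljoin(START_URL, a["href"])
    if not url.startswith("http"):
        continue
    try:
        # HEAD is cheaper; fall back to GET when the server rejects it
        r = requests.head(url, allow_redirects=True, timeout=15)
        if r.status_code >= 400:
            r = requests.get(url, allow_redirects=True, timeout=15)
        print(r.status_code, url)
    except requests.RequestException as err:
        print("ERROR", url, err)

This only checks a single page; walking every Splexicon entry would mean repeating the same loop over the list of entry URLs.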
Hi @alec_stan, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma points are appreciated.
Hi @gcusello, great, thanks!
Hi @inessa40408, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma points are appreciated by all the contributors.
Hello. Thank you for your reply. You are right, I have given little information. We have Windows devices. These devices have a limited network map and do not save log files for all connections to the WiFi network. The only way to get this information is to open CMD and run the command netsh wlan show wlanreport; the report is then saved to the folder C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.html. But the report is only saved after running the command manually on the device, and we need it to be generated constantly, for example once an hour, so that the file can later be loaded into Splunk to analyze how the devices operate and connect to the WiFi network. Yes, of course, we would like to have more information from these devices, such as signal strength, connection drops, failed pings, and the MAC addresses of the access points the device connects to, but for now this is not a priority; I would first like to automate saving the log file to a specific folder. I would be grateful for any ideas or advice on this matter. If you have any clarifying questions, do not hesitate to ask me. Thanks in advance for your answer.
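One possible sketch, under assumptions (the task name, schedule, index, and sourcetype below are placeholders, not anything described above): schedule the netsh command with Task Scheduler and let a Universal Forwarder monitor the report file.

Scheduled task, created from an elevated prompt:
schtasks /create /tn "WlanReportHourly" /sc hourly /ru SYSTEM /tr "netsh wlan show wlanreport"

inputs.conf on the forwarder:
[monitor://C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.html]
index = wifi
sourcetype = wlan_report_html
disabled = 0

Note that netsh overwrites wlan-report-latest.html on each run, so if you need history you may prefer a small script that copies the report to a timestamped file before Splunk picks it up.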
Hi @alec_stan, Splunk doesn't have any kind of clustering at the forwarder level; you have to configure your DS to deploy the same configurations to all the HFs.
Hi @gcusello, thank you for the quick response. That means we do not need to do any form of clustering. In our current setup, we have two intermediate forwarders; they do not store any copy of the data and there is no clustering. From what you are saying, we should deploy two new forwarders at the other site and configure all universal forwarders to now point to the four intermediate forwarders (two in DC1, two in DC2). Thanks again.
Hi @alec_stan, it's surely useful to have at least one or two HFs in the secondary site so you have HA at all layers of your infrastructure; the number depends on the traffic they have to manage. About the DS, you can continue to have only one; it isn't mandatory to have a redundant infrastructure for this role because, in case of a fault at the primary site, the only limitation is that you cannot update your forwarders for a limited time. The case for a second DS is related to the number of forwarders to manage or to having a segregated network; it isn't related to HA. About the configuration of the forwarder layer, you have to configure all the forwarders to send their logs to all the HFs in auto load-balancing mode, and then Splunk will manage the data distribution and failover. Ciao. Giuseppe
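As a rough illustration of that last point, an outputs.conf sketch for the universal forwarders with all four HFs (two per DC) in a single auto load-balanced group; the host names and port are placeholders:

[tcpout]
defaultGroup = intermediate_forwarders

[tcpout:intermediate_forwarders]
# All HFs from both DCs in one group; the UF rotates between them automatically
server = hf1-dc1.example.com:9997, hf2-dc1.example.com:9997, hf1-dc2.example.com:9997, hf2-dc2.example.com:9997
# How often (in seconds) the UF switches to another receiver in the group
autoLBFrequency = 30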
Good day Splunkers, we have two sites/DCs, where one is production and the other a standby DR. In our current architecture, we have intermediate forwarders that forward the logs to Splunk Cloud. All universal forwarders send metrics/logs to these intermediate forwarders. We also have a single deployment server. The architecture is as follows: UF -> IF -> SH (Splunk Cloud). The intermediate forwarders are Heavy Forwarders; they do some indexing and some data transformation, such as anonymizing data. The search head is in the cloud. We have been asked to move from the current production-DR architectural setup to a multi-site (active-active) setup. The requirement is for both DCs to be active and servicing customers at the same time. What is your recommendation in terms of setting up the forwarding layer? Is it okay to provision two more intermediate forwarders in the other DC and have all universal forwarders send to all intermediate forwarders across the two DCs? Is there a best practice that you can point me towards? Furthermore, do we need more deployment servers? Extra info: the network team is about to complete the network migration to Cisco ACI.
Along with what @SanjayReddy shared, https://www.splunk.com/en_us/training/certification.html would also help for looking over the different certifications (the individual links list the respective prerequisites).
How do I create a custom heatmap to project the overall health of all the deployed applications, split by platform and region? Which metrics can we use to project overall application health in Splunk Observability Cloud? In RUM we only have a country property; using that, we are able to split applications by country and environment, but we need to split by platform and region.
How do I create a chart for alert/detector status to showcase the overall health of an application? 1. How many alerts are configured for each application? 2. What is the status of alerts by severity? Which metrics are available to showcase the above use case in an overall health dashboard in Splunk Observability Cloud?
I recommend you first check all the available metrics from this receiver and make sure you enable the ones you want: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/mssql-server-receiver.html If you don't see the metrics you need, you may have to write your own custom SQL to retrieve them using another receiver, the SQL Query receiver: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/sqlquery-receiver.html
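If you do end up on the SQL Query receiver, a minimal collector snippet could look like the one below; the connection string, query, and metric name are placeholders, so confirm the exact options against the receiver docs linked above:

receivers:
  sqlquery:
    driver: sqlserver
    # Placeholder connection string for the SQL Server driver
    datasource: "sqlserver://user:password@mssql-host:1433?database=master"
    collection_interval: 60s
    queries:
      - sql: "SELECT COUNT(*) AS session_count FROM sys.dm_exec_sessions"
        metrics:
          - metric_name: mssql.custom.session_count
            value_column: session_count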
Please review this documentation; it will guide you through creating a global data link: https://docs.splunk.com/observability/en/metrics-and-metadata/link-metadata-to-content.html A couple of notes to point out:
* Data links are created on metadata - things like host name, container ID, etc.
* They appear on certain types of charts, but not all; line charts are a good example of where to find them.
* Click on any point in time on the chart and the data table will appear. The columns that contain metadata will show "..." next to their values; click it to reach existing data links, or start creating a new one by clicking "Configure data links".
* When creating a new data link, you'll probably want to use the option "Show on any value of <your metadata field>" so that the value you have selected in your chart filter can carry through.
Thanks @tscroggins for your upvote and karma points, much appreciated! The last few months I was busy and could not spend time on this one. May I know what your suggestions would be on these points, please:
1) I have been thinking of creating an app as per your suggestion listed below. Would you recommend an app, a custom command, or simply uploading a lookup of the Unicode blocks of all the important languages (tamil_unicode_block.csv) to Splunk?
| makeresults | eval _raw="இடும்பைக்கு" | rex max_match=0 "(?<char>.)" | lookup tamil_unicode_block.csv char output general_category | eval length=mvcount(mvfilter(NOT match(general_category, "^M")))
2) I assume that if I encapsulate your Python script listed below in that app, it should be the work-around for this issue in a language-agnostic way (the app should work for Tamil, Hindi, Telugu, etc.).
3) Or any other suggestions, please. Thanks.

The app idea (your script from the previous reply):

$SPLUNK_HOME/etc/apps/TA-ucd/bin/ucd_category_lookup.py (this file should be readable and executable by the Splunk user, i.e. have at least mode 0500)
#!/usr/bin/env python
import csv
import unicodedata
import sys

def main():
    if len(sys.argv) != 3:
        print("Usage: python category_lookup.py [char] [category]")
        sys.exit(1)
    charfield = sys.argv[1]
    categoryfield = sys.argv[2]
    infile = sys.stdin
    outfile = sys.stdout
    r = csv.DictReader(infile)
    header = r.fieldnames
    w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
    w.writeheader()
    for result in r:
        if result[charfield]:
            result[categoryfield] = unicodedata.category(result[charfield])
        w.writerow(result)

main()

$SPLUNK_HOME/etc/apps/TA-ucd/default/transforms.conf
[ucd_category_lookup]
external_cmd = ucd_category_lookup.py char category
fields_list = char, category
python.version = python3

$SPLUNK_HOME/etc/apps/TA-ucd/metadata/default.meta
[]
access = read : [ * ], write : [ admin, power ]
export = system

With the app in place, we count 31 non-whitespace characters using the lookup:
| makeresults | eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்" | rex max_match=0 "(?<char>.)" | lookup ucd_category_lookup char output category | eval length=mvcount(mvfilter(NOT match(category, "^M")))

Since this doesn't depend on a language-specific lookup, it should work with text from the Kural or any other source with characters or glyphs represented by Unicode code points. We can add any logic we'd like to an external lookup script, including counting characters of specific categories directly:
| makeresults | eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்" | lookup ucd_count_chars_lookup _raw output count

If you'd like to try this approach, I can help with the script, but you may enjoy exploring it yourself first.
This was a fun thread! I upvoted https://ideas.splunk.com/ideas/EID-I-2176.
Hi @zerocoolspain, I would use separate but similar radio inputs. Each radio input has its own set of tokens; however, updating a radio input also updates the global trobots token. The currently selected trobots1 and trobots2 values are preserved across changes to the tintervalo token. <form version="1.1" theme="light"> <label>intervalo</label> <init> <unset token="trobots"></unset> </init> <fieldset> <input type="dropdown" token="tintervalo" searchWhenChanged="true"> <label>Intervalo</label> <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimasemana">Última semana completa</choice> <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimomes">Último mes completo</choice> <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimotrimestre">Último trimestre completo</choice> <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoultimoaño">Último año completo</choice> <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusomescurso">Mes en curso</choice> <choice value="|loadjob savedsearch=&quot;q71139x:vap:precalculoVAPusoañoencurso">Año en curso</choice> <choice value="7">Otros</choice> <change> <condition match="'tintervalo'==7"> <set token="show_trobots1">true</set> <unset token="show_trobots2"></unset> <set token="trobots">$trobots1$</set> </condition> <condition match="'tintervalo'!=7"> <unset token="show_trobots1"></unset> <set token="show_trobots2"></set> <set token="trobots">$trobots2$</set> </condition> </change> </input> <input type="radio" token="trobots1" depends="$show_trobots1$" id="inputRadioRI1" searchWhenChanged="true"> <label>Robots</label> <choice value="| eval delete=delete">Yes</choice> <choice value="`filter_robots` `filter_robots_ip`">No</choice> <initialValue>`filter_robots` `filter_robots_ip`</initialValue> <change> <set token="trobots">$trobots1$</set> </change> </input> <input type="radio" token="trobots2" depends="$show_trobots2$" id="inputRadioRI2" searchWhenChanged="true"> <label>Robots</label> <choice value="conBots">Yes</choice> <choice value="sinBots">No</choice> <initialValue>sinBots</initialValue> <change> <set token="trobots">$trobots2$</set> </change> </input> </fieldset> <row> <html> <table> <tr> <td><b>tintervalo:</b></td><td>$tintervalo$</td> </tr> <tr> <td><b>trobots1:</b></td><td>$trobots1$</td> </tr> <tr> <td><b>trobots2:</b></td><td>$trobots2$</td> </tr> <tr> <td><b>trobots:</b></td><td>$trobots$</td> </tr> </table> </html> </row> </form>  
Hi @tscroggins and all, could you please check this:

the file http_error_code.csv
StatusCode,Meaning
100,Continue
101,Switching protocols
403,Forbidden
404,Not Found

the file http_error_codes_400.csv
StatusCode,Meaning
400,Bad Request
401,Unauthorized
402,Payment Required
403,Forbidden
404,Not Found
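If the idea is to merge or compare the two files once they are uploaded as lookups, a quick SPL sketch (the lookup names below assume the files are uploaded with exactly these names):

| inputlookup http_error_code.csv
| append [| inputlookup http_error_codes_400.csv]
| dedup StatusCode
| sort StatusCode

dedup keeps the first occurrence of each StatusCode, so codes that appear in both files, such as 403 and 404, show up only once.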