All Posts

No, I have not used the LINE_BREAKER option. Do I need to create a props.conf under $SPLUNK_HOME/etc/apps/<your_app>/local/ and add these 2 lines, i.e. [sourcetype] and LINE_BREAKER = :::::::::::::::::::?
Hi @Sailesh6891 , have you tried the LINE_BREAKER option in props.conf?

[your-sourcetype]
LINE_BREAKER = :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::([\r\n]+)

Ciao. Giuseppe
Hi, I have a log file on the server which I ingested into Splunk through an input app, where I defined the index, sourcetype, and monitor stanza in inputs.conf. The log file on the server looks like this:

xyz asdfoasdf asfanfafd
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
sdfsdfja agf[oija[gfojerg fgoaierr apodsifa[soigaiga[oiga[dogj
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
sadfnasd;fiasfdoiasndf'i dfdf fd garehaehseht shse thse tjst
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
asdf;nafdsknasdf asdfknasdfln asdf;nasdkfnasf asogja'fja foj'apogj aogj agf

When I search the log file in Splunk, the logs are visible; however, the events are not breaking as I expect them to. I want the events to be separated as below:

Event 1:
xyz asdfoasdf asfanfafd
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Event 2:
sdfsdfja agf[oija[gfojerg fgoaierr apodsifa[soigaiga[oiga[dogj
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Event 3:
sadfnasd;fiasfdoiasndf'i dfdf fd garehaehseht shse thse tjst
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Event 4:
asdf;nafdsknasdf asdfknasdfln asdf;nasdkfnasf asogja'fja foj'apogj aogj agf
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
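A minimal props.conf sketch of what the reply above is pointing at, assuming [your-sourcetype] is replaced with the actual sourcetype and that the file goes in $SPLUNK_HOME/etc/apps/<your_app>/local/ on the parsing tier (heavy forwarder or indexer). The correct attribute is LINE_BREAKER, and it needs a capturing group: the captured text is discarded and marks the event boundary, so capturing only the trailing newline keeps the colon line at the end of each event. This only affects data indexed after a restart; already-indexed events keep their old breaking.

[your-sourcetype]
# Do not re-merge the broken lines into multi-line events.
SHOULD_LINEMERGE = false
# Break after each long run of colons; only the captured newline is
# discarded, so the colon separator stays with the event above it.
LINE_BREAKER = :{70,}([\r\n]+)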
Hi @anooshac, you have to coalesce the key fields and then correlate them using stats. If the fields to correlate are field1 and field2, and the fields to display are field3 and field4 from type1 and field5 from type2:

index=your_index sourcetype=your_sourcetype type IN (type1, type2)
| eval key=coalesce(field1,field2)
| stats values(field3) AS field3 values(field4) AS field4 values(field5) AS field5 BY key

Ciao. Giuseppe
Hi all, I have 2 events present in a sourcetype, with different data. There is one field which has the same data in both events, but the field names are different. Can anyone suggest a method other than join to combine the 2 events? I tried combining the fields with the coalesce command, but once I combined them I was not able to see the combined fields. I want to combine the events and do some calculations.
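A sketch of the coalesce-then-stats pattern from the reply above, extended with the calculation step the question asks about. All field names here (order_id/orderId as the differently named key, price and quantity as the values to combine) are hypothetical placeholders:

index=your_index sourcetype=your_sourcetype
| eval key=coalesce(order_id, orderId)
| stats values(price) AS price values(quantity) AS quantity BY key
| eval total=price*quantity

Unlike join, this runs as a single streaming search and avoids join's subsearch limits; each pair of events collapses into one row per key, on which the eval can then operate.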
Output1 and output2 are dynamic values which we get from the field "Field1". I tried your two queries but no luck. If I remove the condition (where) I can get the results. It seems like there is an issue with the condition (output1 and output2).
That's a good direction! Unfortunately it's still not working 100%. I used your code in my props.conf:

[APIGW]
SEDCMD-trim-file = s/(\\"file\\":\s*\\")([^\\"]{5000,}?)/\1long_file/g

and here are the results: it's like it only replaces the first 5000 characters instead of the entire field. But this is a big step in the right direction, thank you for your help! I will try taking it from here, but it would be much appreciated if you have the solution in mind and can share it.

EDIT: From a few tests I've made, it stops the field change exactly after 5000 characters instead of running until the first comma / end of field.

EDIT 2: The regex that was needed was:

SEDCMD-trim-file = s/(\\"file\\":\s*\\")([^\\"]{5000,})(\\")/\1long_file/g

but thank you for all the help!

EDIT 3: Well, apparently this solution alone is not enough. I also had to increase the TRUNCATE value, because the SEDCMD replacement runs at the end: the event first comes in at the default 10,000 characters and only then is the replacement applied, which is not good enough because the final result is still truncated events. I needed to increase the TRUNCATE value so the whole event is received and the replacement is done afterwards.
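Putting the thread's findings together, a consolidated props.conf sketch (the TRUNCATE value is an assumption; size it to your longest raw event, or use 0 to disable truncation entirely). Note the replacement here keeps the closing escaped quote via \3 so the JSON stays intact:

[APIGW]
# Read the whole event before SEDCMD runs; the default of 10000
# characters truncated events before the replacement could fire.
TRUNCATE = 100000
# Replace any "file" value of 5000+ characters with a placeholder.
SEDCMD-trim-file = s/(\\"file\\":\s*\\")([^\\"]{5000,})(\\")/\1long_file\3/g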
Hi there, before solving this I thought to double-check it with this idea:
1) Write a small Python script which will fetch all links from the Splexicon page.
2) Verify the HTTP return code (404 or 401 or ...), so we can make sure that no more broken links are there.

As I was working on this, I noticed the Splunk version hardcoded in the docs links. For example, take this one: https://docs.splunk.com/Splexicon:Eventdata. This page has two more links under "For more information":

In Getting Data In:
Overview of event processing
Assign the correct source types to your data

The clean links are:
https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/Overviewofeventprocessing
https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/Setsourcetype

When I was working on this last time, I remember that the links used "latest"; now the version is hardcoded as "9.3.2". So, if Splunk releases 9.3.3, may I know whether some Splunk Docs admin will manually edit/update these links? Please suggest, thanks.
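A rough sketch of the link checker described above, assuming the Splexicon index page as the starting URL and the third-party requests and beautifulsoup4 packages; it has not been tested against the live site:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE = "https://docs.splunk.com/Splexicon"

resp = requests.get(BASE, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

for a in soup.find_all("a", href=True):
    url = urljoin(BASE, a["href"])
    if not url.startswith("http"):
        continue  # skip mailto:, javascript:, in-page anchors, etc.
    try:
        # HEAD is cheaper; some servers reject it, so fall back to GET.
        r = requests.head(url, allow_redirects=True, timeout=30)
        if r.status_code >= 400:
            r = requests.get(url, allow_redirects=True, timeout=30)
        if r.status_code >= 400:
            print(r.status_code, url)
    except requests.RequestException as exc:
        print("ERROR", url, exc)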
Hi @alec_stan , good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
Hi @gcusello , great, thanks!
Hi @inessa40408 , good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors
Hello. Thank you for your reply. You are right, I had given little information.

We have Windows devices. These devices have a limited network map and do not save log files for all connections to the WiFi network. The only way to get this information is to go to CMD and run the command:

netsh wlan show wlanreport

The report is then saved to the folder C:\ProgramData\Microsoft\Windows\WlanReport\ as wlan-report-latest.html.

But this report is only generated after manual entry on the device. We need this report to be generated continuously, for example once an hour, so that the file can later be loaded into Splunk to analyze the operation of, and connections to, the WiFi network.

Yes, of course, we would like more information from these devices, such as signal strength, connection drops, failed pings, and the MAC addresses of the access points the device connects to. But for now that is not the priority; first I would like to automate saving the log file to a specific folder.

I would be thankful for any ideas or advice on this matter. If you have any clarifying questions, do not hesitate to ask me. Thanks in advance for your answer.
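One way to automate this end to end from the Splunk side (a sketch only, not a tested configuration): a scripted input on the Universal Forwarder runs the netsh command on an interval, and a monitor stanza picks up the generated HTML report. The app name, index, and sourcetype values below are placeholders, and the forwarder service needs sufficient rights to run netsh wlan show wlanreport.

$SPLUNK_HOME\etc\apps\wlan_report\bin\wlan_report.bat:

@echo off
netsh wlan show wlanreport

$SPLUNK_HOME\etc\apps\wlan_report\local\inputs.conf:

# Regenerate the WLAN report once an hour.
[script://.\bin\wlan_report.bat]
interval = 3600
sourcetype = wlan:report:run
index = main
disabled = 0

# Pick up the report each time it is rewritten.
[monitor://C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.html]
sourcetype = wlan:report:html
index = main
# initCrcLength may need raising if the report's first bytes are
# identical between runs and Splunk skips the rewritten file.
disabled = 0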
Hi @alec_stan , Splunk doesn't have any kind of clustering at the forwarder level; you have to configure your DS to deploy the same configuration to all the HFs.
Hi @gcusello , thank you for the quick response. That means we do not need to do any form of clustering. In our current setup, we have two intermediate forwarders; they do not store any copy of the data and there is no clustering. From what you are saying, we should deploy two new forwarders on the other site and configure all universal forwarders to point to the four intermediate forwarders (two on DC1, two on DC2). Thanks again.
Hi @alec_stan , it's surely useful to have at least one or two HFs in the secondary site to have HA on all the layers of your infrastructure; the number depends on the traffic they have to manage.

About the DS: you can continue to have only one DS. It isn't mandatory to have a redundant infrastructure for this role because, in case of a fault at the primary site, the only limitation is that you cannot update your forwarders for a limited time. The case for a second DS is related to the number of forwarders to manage, or to having a segregated network; it isn't related to HA.

About the configuration of the forwarder layer: you have to configure all the forwarders to send their logs to all the HFs in auto load-balancing mode, and then Splunk will manage the data distribution and failover.

Ciao. Giuseppe
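A sketch of the outputs.conf this implies on every universal forwarder, with hypothetical hostnames standing in for the four intermediate HFs. With multiple servers in one target group, the forwarder auto load-balances across them by default and fails over to the remaining servers if one becomes unreachable:

[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
# Two HFs per DC; traffic is auto load-balanced across all four.
server = hf1.dc1.example.com:9997, hf2.dc1.example.com:9997, hf1.dc2.example.com:9997, hf2.dc2.example.com:9997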
Good day Splunkers, we have two sites/DCs, where one is production and the other a standby DR. In our current architecture, we have intermediate forwarders that forward the logs to Splunk Cloud. All universal forwarders send metrics/logs to these intermediate forwarders. We also have a single deployment server. The architecture is as follows:

UF -> IF -> SH (Splunk Cloud)

The intermediate forwarders are heavy forwarders; they do some indexing and some data transformation, such as anonymizing data. The search head is in the cloud. We have been asked to move from the current production-DR setup to a multi-site (active-active) setup. The requirement is for both DCs to be active and servicing customers at the same time.

What is your recommendation in terms of setting up the forwarding layer? Is it okay to provision two more intermediate forwarders on the other DC and have all universal forwarders send to all intermediate forwarders across the two DCs? Is there a best practice you can point me towards? Furthermore, do we need more deployment servers?

Extra info: the network team is about to complete a network migration to Cisco ACI.
Along with what @SanjayReddy shared, https://www.splunk.com/en_us/training/certification.html would also help for looking over the different certifications (the individual links list their respective prerequisites).
How can I create a custom heatmap to show the overall health of all deployed applications, split by platform and region? Which metrics can we use to represent overall application health in Splunk Observability Cloud? In RUM we have only the country property; using that, we are able to split applications by country and environment, but we need to split by platform and region.
How can I create a chart of alert/detector status to showcase the overall health of an application?

1. How many alerts are configured for each application?
2. What is the status of alerts by severity?

Which metrics are available to build this use case into an overall health dashboard in Splunk Observability Cloud?