All Posts


Hello @isoutamo, hello @yuanliu, thank you for your replies. At the moment I use coalesce as a quick fix for the issue, but I think in the long run I will implement the lookup solution. Thank you both for your help! Kind regards, Flenwy
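
For readers landing here, a minimal sketch of the two approaches mentioned above; the field names and the lookup file are placeholders, since the thread's actual fields are not shown:

``` quick fix: take the first non-null of the two source fields ```
... | eval common_field=coalesce(fieldA, fieldB)

``` long-run fix: normalize values through a lookup table ```
... | lookup field_mapping.csv raw_field OUTPUT normalized_field

The lookup variant keeps the mapping in one editable file instead of hard-coding it into every search.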
Hi, my indexer has to forward some logs to QRadar on 2 different ports:
logs from index A > QRadar port 12468
logs from index B > QRadar port 514
Regards, pawel
Hi, I am importing a CSV file in Splunk Enterprise that has a semicolon as the field separator, but Splunk does not parse it correctly. For instance, this field --> SARL "LE RELAIS DU GEVAUDAN";;;"1 is treated as a whole and does not get split. Do you know which settings I should configure in the file import wizard in order to import it? Thank you. Kind regards, Marta
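
For reference, a props.conf sketch of the settings involved; the sourcetype name is a placeholder, and the same options appear as the delimited-file settings in the Add Data wizard:

[semicolon_csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ;
FIELD_QUOTE = "

In the wizard itself, choosing indexed extractions of type csv and setting the field delimiter to the semicolon should produce the same result.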
Hi, one option is to use several strategies that point to different LDAP servers with identical content. Another option is to put a load balancer in front of the LDAP servers and use its VIP address as the server for the strategy. That is probably the easier solution overall. r. Ismo
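
A sketch of the first option in authentication.conf; the strategy names and hosts are placeholders:

[authentication]
authType = LDAP
authSettings = ldap_dc1, ldap_dc2

[ldap_dc1]
host = ldap1.demo.domain.local
port = 636
SSLEnabled = 1

[ldap_dc2]
host = ldap2.demo.domain.local
port = 636
SSLEnabled = 1

With the load-balancer option, authSettings would list a single strategy whose host is the VIP address.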
Hi @GaetanVP, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hello there, is there a solution to this question? We too (like many others, I guess) have domains with multiple LDAP servers behind them. Either we register several strategies per domain, which in the end gives us about 15 strategies or more, or we solve it with the DNS record for the domain (example: demo.domain.local). In my opinion, Splunk will then connect to one of the multiple servers behind this DNS record via round robin. What are the possibilities, and how did you solve this? With so many strategies we have the problem that an adjustment to roles, with a subsequent reload, takes very long on a search head cluster. Clearly, the strategies are only one of many parts of a reload, and yet this would help us.
Hi Team, I am trying to monitor a .NET Windows service application and I have followed the instructions in the link below. https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/configure-the-net-agent-for-windows-services-and-standalone-applications I am not a developer and don't have the source code (namespace/class/functions) of my Windows service, so I couldn't add custom POCO entry points. How do I discover my .NET functions from the Windows service? Can anyone help me... Regards, Durai
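
For context, the registration step on that page declares the service executable in the agent's config.xml; this sketch uses placeholder executable and tier names:

<standalone-applications>
    <standalone-application executable="MyWindowsService.exe">
        <tier name="My Service Tier"/>
    </standalone-application>
</standalone-applications>

For the POCO entry points themselves, one common workaround when you have no source code is to open the service's assemblies in a .NET decompiler (for example ILSpy) to read the namespace, class, and method names, or to look at the call graphs in the agent's snapshots once basic instrumentation is running.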
Hi @Adpafer, let me understand, because your requirement isn't clear to me: if you have one Indexer receiving on port xx and port yy, what do you mean by saying the Indexer forwards data on those ports? Are you speaking of an Indexer or a Forwarder? Or are you speaking of forwarding data to a third party? In other words, could you better describe your requirement in terms of data flow? Ciao. Giuseppe
Hi @bowesmana, the method you mentioned is working fine. Sorry for not mentioning it before: the column VALUE can have data like {'a','b'} or a. In such cases, how can I change the regex?
Hi @Ammar, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @anooshac, this seems to be data in JSON format. If it isn't in a lookup, you could parse it and store it in an index using the INDEXED_EXTRACTIONS=json option of props.conf (https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Propsconf). In this way you extract all the fields and can use the values for your matches. Ciao. Giuseppe
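
A minimal props.conf sketch of that option; the sourcetype name is a placeholder:

[my_json_data]
INDEXED_EXTRACTIONS = json
# on the search head, avoids extracting the same fields a second time at search time
KV_MODE = none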
@abi2023 - You would use timechart with span=1mon instead of stats. But you are using "by name", and a group-by field won't work with a single value chart. You could, though, use a table with trendlines if you want to group by. You can find an example of this inside the app called Splunk Dashboard Examples on Splunkbase. I hope this helps!!!
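
A short sketch of both variants; the index name is a placeholder:

``` single value panel: one monthly series, no by clause ```
index=my_index | timechart span=1mon count

``` table with a per-name breakdown, suitable for trendlines ```
index=my_index | timechart span=1mon count by name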
Where is your data coming from? Data has to be either
a) in a Splunk index
b) in a lookup in Splunk
c) part of your search, as in my | makeresults example
Just change the regex in the 2 rex statements

| makeresults
| eval _raw="Group;Value;Data
{'Ala_ABC':'1','Bob_XX_':'2'};{'Ala_ABC','Bob_XX_'};
{'Ala_ABC':1,'Bob_XX_':'2'};{'Ala_ABC'};{'Bob_XX_'}
{'Ala_ABC':1,'Bob_XX_':'2','_c_is_for_Charlie':'3'};{'Ala_ABC'};{'Bob_XX_','_c_is_for_Charlie'}"
| multikv forceheader=1
| table Group Value Data
``` This is your Splunk SPL ```
| rex field=Group max_match=0 "'(?<g>[A-Za-z_]+)':'"
| rex field=Value max_match=0 "'(?<v>[A-Za-z_]+)'"
| eval Calculated_Data=mvmap(g, if(g!=v, g, null()))
| eval Calculated_Data="{'".mvjoin(Calculated_Data, "','")."'}"
| fields - g v
This is the received payload.

index=us_whcrm source=MuleUSAppLogs sourcetype="bmw-crm-wh-xl-retail-amer-prd-api" ((severity=ERROR "Transatcion") OR (severity=INFO "Received Payload"))
| rex field=message "(?<json_ext>\{[\w\W]*\})"
| table _time properties.correlationId json_ext
| spath input=json_ext
| rename properties.correlationId as correlationId processRetailDeliveryReporting.processRetailDeliveryReportingDataArea.retailDeliveryReporting.retailDeliveryReportingVehicleLineItem.vehicle.vehicleID as VinId
| eval BMWUnit=replace(BMWUnit,"(\w{3})(\w{2})","\1-\2")
| table _time correlationId BMWUnit dealerId Description VinId
| stats earliest(_time) as _time values(*) as * by correlationId
| where isnotnull(Description)

I am using this query to get all the errors and their field details in the table, and it is working. But now there is one condition: I have to differentiate the errors, which are of two types. One we can get from the flow end event [sync/c2v] which I shared, and these errors I am calculating from the Description field. What could I change in my query to find the error type?
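
One hedged way to add the type, appended after the stats; the match strings are placeholders you would replace with whatever actually distinguishes the two kinds of error in your Description values:

| eval error_type=case(
    like(Description, "%sync%"), "sync flow-end error",
    like(Description, "%c2v%"), "c2v flow-end error",
    isnotnull(Description), "other"
  )

case() assigns the first matching label, so put the most specific pattern first.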
Should the CSV file be uploaded as a lookup file? Can we avoid that? Also, the strings can contain _ as well as capital and small letters. How can I do this? Is the [A-Za-z_] regular expression fine?
You can do it like this runnable example with your data - using rex statements

| makeresults
| eval _raw="Group Value Data
{'a':'1','b':'2'} {'a','b'}
{'a':1,'b':'2'} {'a'} {'b'}
{'a':1,'b':'2','c':'3'} {'a'} {'b','c'}"
| multikv forceheader=1
| table Group Value Data
``` This is your Splunk SPL ```
| rex field=Group max_match=0 "'(?<g>\w)':"
| rex field=Value max_match=0 "'(?<v>\w)'"
| eval Calculated_Data=mvmap(g, if(g!=v, g, null()))
| eval Calculated_Data="{'".mvjoin(Calculated_Data, "','")."'}"
| fields - g v

So, if you have a CSV file with Group and Value in it, then

| inputlookup your_csv.csv
| rex field=Group max_match=0 "'(?<g>\w)':"
| rex field=Value max_match=0 "'(?<v>\w)'"
| eval Data=mvmap(g, if(g!=v, g, null()))
| eval Data="{'".mvjoin(Data, "','")."'}"
| fields - g v
I have a CSV file which has some columns. There is one column named GROUP, and the data in that column are in the format {'a':1,'b':2}; there can be any number of strings. There is another column VALUE, and the data are in the format {'a','b'}. I want to check if the strings in the VALUE column are present in the GROUP column and create a separate column named DATA with the strings that are not present. I am not sure how to achieve this in Splunk using commands. Does anyone have any suggestions? Example:

Group                     Value       Data
{'a':'1','b':'2'}         {'a','b'}
{'a':1,'b':'2'}           {'a'}       {'b'}
{'a':1,'b':'2','c':'3'}   {'a'}       {'b','c'}

There are many columns like these, and the GROUP column can contain more strings.
I got the errors "failed to: delete_local_spark_dirs on" and "failed to: force_kill_spark_jvms on" when I run /opt/caspida/bin/Caspida start-all. Any idea how I can resolve this?

I was not able to access the web UI, so I ran the command (on the UBA manager) /opt/caspida/bin/Caspida stop-all. There was an error, and when I tried to run start-all, it showed the same error.
You can achieve this by modifying inputs.conf and outputs.conf. Can you follow the steps below?

Your inputs.conf should tie each input to its own output group via _TCP_ROUTING:

[monitor:///path/to/data1]
disabled = false
index = your_index1
sourcetype = your_sourcetype1
_TCP_ROUTING = your_index1_group

[monitor:///path/to/data2]
disabled = false
index = your_index2
sourcetype = your_sourcetype2
_TCP_ROUTING = your_index2_group

---------------------

And outputs.conf is below; the [tcpout:...] group stanzas define the two destination ports:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:your_index1_group]
server = 10.10.10.10:xx

[tcpout:your_index2_group]
server = 10.10.10.10:yy
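
Since the original question routes by index on an indexer rather than per monitored file, a transforms-based sketch may fit better; the index names and group names are placeholders, and this assumes the data still passes through the parsing pipeline:

# props.conf
[default]
TRANSFORMS-qradar_routing = route_indexA, route_indexB

# transforms.conf
[route_indexA]
SOURCE_KEY = _MetaData:Index
REGEX = ^A$
DEST_KEY = _TCP_ROUTING
FORMAT = your_index1_group

[route_indexB]
SOURCE_KEY = _MetaData:Index
REGEX = ^B$
DEST_KEY = _TCP_ROUTING
FORMAT = your_index2_group

Also, since QRadar is a third-party system, the QRadar output groups in outputs.conf will likely need sendCookedData = false so events are sent as plain text rather than in Splunk's cooked format.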