All Posts

What I really want is the following. This is query 1 and its output:

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]")
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO by F62_2
| where F19!=036 AND FCO=01

F62_2            F19  FCO
384011068172061  840  1
584011056069894  826  1

Query 2:

eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2

What I really want is to take the output of query 1 and pass it as input to query 2; the common field between the two queries is F62_2. Run separately, the two queries produce different outputs, so they should be combined so that the F62_2 values from query 1 drive query 2 and produce values(txn_uid) as txn_uid and values(txn_timestamp) as txn_timestamp.
OK, thanks. Would you please share your thoughts on how to merge the two queries?
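One way to do this (sketched here as an untested suggestion, built only from the searches and field names in the question) is to run both eventtypes in a single search and let stats group everything by F62_2 — F19/FCO come from the request events and txn_uid/txn_timestamp from the formatting events, so after the stats each F62_2 row carries all four:

```spl
(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]") OR eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO, values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2
| where F19!="036" AND FCO="01"
```

A subsearch (query 2 with `[search <query 1> | fields F62_2]` appended) is the other classic pattern, but since F62_2 here only exists after a rex at search time, the subsearch's F62_2=... filter would not match raw events — so the combined-stats form above is the safer sketch.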
Hi Team, In a role we are giving the user read-only access, and we have set up the capabilities, inheritance, resources, and restrictions. But that user is still able to delete queries and delete reports as well. How do we hide the delete option on reports? Please guide us through the process.
Hi, Does anyone have experience monitoring Azure Integration Services with AppDynamics? Suggestions on a setup would be appreciated. The services will be calling an on-premise .NET application; the ability to drill down downstream is not a must but would be really nice to have. br Kjell Lönnqvist
2. Ensure 'nimalert' is stored under /opt/nimbus/bin/nimalarm; if not, you can change the path in TA-nimbusalerting/bin/nimbus.sh. The add-on is written by a private person (there is a gmail address on the Contact tab in Splunkbase) and is not very popular, so it's hard to tell without deeper debugging what's going on. I'd start by looking in _internal for any logs containing "nimbus".
While @bowesmana 's solution is correct, it might not be the fastest one. If your data haven't already rolled past the retention date, you can see if the licensing report is enough for you (but as far as I remember it's broken down either by host or by index). Unfortunately, if you want to measure the size of raw data (which is what you're asking about), you need to read all the raw data back from the time period you need to analyze — which is gonna be painfully slow if your environment is of any decent size.
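For reference, the brute-force version of that measurement is just summing len(_raw) over the period. A minimal sketch — the index name and time range are placeholders you would swap for your own:

```spl
index=your_index earliest=-30d@d latest=@d
| eval raw_bytes=len(_raw)
| stats sum(raw_bytes) as total_bytes by index, sourcetype
| eval total_GB=round(total_bytes/1024/1024/1024, 2)
```

Since this reads every event back off disk, narrowing the time range and index list as much as possible is the only real lever on how long it takes.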
Ugh. Docker. But seriously, first things first: check with plain openssl whether you can properly connect to the server. If not, the problem is on the server's side; if yes, it's on the client's side.

openssl s_client -connect splunk.your.org.domain:8089 -CAfile path_to/your_rootCA.pem
How did you proceed further? We're also looking at integrating backbase with Splunk for logging and monitoring purposes.
Hi @anooshac , good for you, see next time! let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Poi... See more...
Hi @anooshac , good for you, see next time! let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Unfortunately, Splunk cannot automatically extract MSG_DATA correctly because the XML document contains double quotes. If MSG_DATA is always the last field in the event, you can use

| eval MSG_DATA = replace(_raw, ".+,\s*MSG_DATA=\"|\"$", "")
| spath input=MSG_DATA path=Message.additionalInfo.fileDetails.fileDetail.title
| table Message.additionalInfo.fileDetails.fileDetail.title

Your sample data (which includes an invalid fragment that I removed) results in

Message.additionalInfo.fileDetails.fileDetail.title
FABDC REDS Letter 11-222-333

Normally, I advise against treating structured data as text. But if you cannot be certain that MSG_DATA is the last field and cannot be certain of the exact terms that follow MSG_DATA, rex as @richgalloway suggested would be more stable.
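For completeness, a sketch of the rex variant of the same extraction — anchoring to the end of the event is still an assumption that MSG_DATA comes last, so treat this as illustrative rather than definitive:

```spl
| rex field=_raw "MSG_DATA=\"(?<MSG_DATA>.*)\"\s*$"
| spath input=MSG_DATA path=Message.additionalInfo.fileDetails.fileDetail.title
| table Message.additionalInfo.fileDetails.fileDetail.title
```

The practical difference from the eval/replace form is only that rex names the capture explicitly; both stand or fall on the same positional assumption.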
@jayeshrajvir wrote: append and appendcol simply appending the query its like a glue. Please correct me if i am wrong   Not quite right - append adds events to the event pipeline, appendcols adds fields to existing events, i.e. append is vertical "glue" whereas appendcols is horizontal "glue". For completeness, appendpipe is also vertical "glue", but it uses the existing events pipeline as its base data rather than a new search.
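A minimal way to see the distinction, using makeresults (a generating command that emits one dummy event):

```spl
| makeresults
| eval a="base"
| append [| makeresults | eval a="appended"]
```

This yields two rows sharing the single field a — the subsearch's row is glued on vertically. Swapping the last line for `| appendcols [| makeresults | eval b="sideways"]` instead yields one row carrying both a and b: the subsearch's fields are glued on horizontally to the existing row.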
Hi All, I have a dashboard with 3 radio buttons: Both, TypeA, and TypeB. I also have a table. The requirement is that if I select Both or TypeA in the radio buttons, columnA and columnB in the table should be highlighted; if I select TypeB, only columnA should be highlighted. How can I achieve this? I have tried using a color palette expression like the one below, but no luck. Does anyone have a solution for this?

<format type="color" field="columnA">
  <colorPalette type="list">["#00FFFF"]</colorPalette>
</format>
<format type="color" field="columnB">
  <colorPalette type="expression">if(match(Type,"TypeB")," ", "#00FFFF")</colorPalette>
</format>
Thank you @gcusello , I'll try using lookup.
As you noted, "someLog" is just a text identifier to connect the two sets. I deduce that "consistencies" and "inconsistencies" are also mere text identifiers, not associated with a specific field. If this is correct, your problem can be clarified as: find values of someField that occur only in events containing the identifier term "inconsistencies" and not in events containing the identifier term "consistencies". This way, it is easy to translate into SPL:

sourcetype="my_source" someLog (consistencies OR inconsistencies)
| eval consistent_or_not = if(searchmatch("consistencies"), "consistent", "inconsistent")
| stats values(someField) as someField by consistent_or_not
| stats values(consistent_or_not) as consistent_or_not by someField
| where mvcount(consistent_or_not) < 2 AND consistent_or_not == "inconsistent"

Hope this helps.
No. I assume that there is simply no ?auto_extract_timestamp=true functionality in versions before 8.0. After that, it works OK.
Depends on the actual use case - the data you have and the desired output. You already had one example in this thread from @ITWhisperer .
Hi, Based on your error message, this is related to network connectivity. Check both host- and network-based firewalls to see that everything is OK. If I understand correctly, you have already fixed this on your firewall side? Whether you should use an HF as a hub/concentrator depends entirely on your security policy. If you have a strictly security-zone-based architecture (direct connections to the outside are not allowed), then you definitely need intermediate forwarders. If not, they just add complexity to your environment and don't give you the best performance. If you have a lot of UFs and don't have any other configuration management software/service/system in place, then you should use a DS; if you already have something in place, you should use that instead of bringing in a totally new way to do it. r. Ismo
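To make the intermediate-forwarder option concrete: routing UFs through an HF is just an outputs.conf choice on each universal forwarder. A minimal sketch — hostnames and the 9997 receiving port are placeholders for your own environment:

```ini
# outputs.conf on each universal forwarder (placeholder hostnames)
[tcpout]
defaultGroup = intermediate_hf

[tcpout:intermediate_hf]
# UFs load-balance across the listed intermediate heavy forwarders
server = hf1.example.com:9997, hf2.example.com:9997
```

The HFs then carry their own outputs.conf pointing at the indexers (or at Splunk Cloud), so only the HFs need firewall rules toward the outside.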
Hello, Firstly, the requirement is that we want to monitor the Docker containers present on the server. We first tried the approach of instrumenting our Machine Agent inside each Docker container, but with this approach our Docker image becomes heavy and our application performance may decrease. So instead we instrumented the Machine Agent in a Docker container running on that local server; that Machine Agent works correctly and provides metrics for some containers, but not for all of them. We took reference from the GitHub repository (https://github.com/Appdynamics/docker-machine-agent.git), but in our environment there are 40 containers and with this method only 9 containers are being monitored. Can anyone help me solve this issue? Here you can see only 9 containers. Regards, Dishant
Hi Splunk Experts, We are trying to integrate CA UIM with Splunk in order to send Splunk alerts to CA UIM. We installed the Nimbus (CA UIM) add-on and configured an alert to trigger events, and we also installed the Nimbus agent on the Splunk Enterprise server (deployed on Linux x64) as per the instructions, but no alerts are triggered for the search even when the condition matches. However, when we check manually we can see many triggered alerts under the trigger section. Can anyone suggest what the issue could be and how to resolve it? Below is the search and alert configuration. Thank you in advance. Regards, Eshwar
@bowesmana Sure thing, I have 2 problems. 1st, I would like to add more than one page to my dashboard: a single dashboard page with multiple pages organized similarly to tabs in a browser. 2nd, I would like one of the tabs on that same page to redirect to another Splunk dashboard in a different app. This is why I am asking whether it is possible, or whether I will have to clone that dashboard into one of the tabs. Thanks!