All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@Yann.Buccellato, are you using the on-prem or SaaS version of the product and docs?

* On-prem doc: https://docs.appdynamics.com/appd/onprem/latest/en/extend-appdynamics/integration-modules/integrate-appdynamics-with-servicenow-cmdb-and-event-management
* SaaS doc: https://docs.appdynamics.com/appd/23.x/latest/en/extend-appdynamics/integration-modules/integrate-appdynamics-with-servicenow-cmdb-and-event-management

We're looking into the issue now, but it'll be helpful to understand what deployment you're using and what docs you're looking at. Thanks!
Hi Giuseppe, Thank you for your response. I also have no idea why an input.conf file was created or how it was created. I will test whether my deployment server can push out an empty input.conf file to that folder; otherwise I might just have to use PowerShell to delete and replace that file on our hosts. To be clear, this behavior is unusual, right?
I'm trying to break out the comma-separated values in my results, but I'm drawing a blank. I want to break out the specific reasons - {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}

index="okta" actor.alternateId="*mydomain*" outcome.reason=*CHALLENGE* client.geographicalContext.country!="" actor.displayName!="Okta System" AND NOT "okta_svc_acct"
| bin _time span=45d
| stats count by outcome.reason, debugContext.debugData.behaviors
| sort -count outcome.reason debugContext.debugData.behaviors

Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=NEGATIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=POSITIVE, New Device=POSITIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=POSITIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=NEGATIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
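One way to break those pairs out is to extract them as multivalue fields and expand them into one row per reason. This is only a sketch, assuming the braces string lands in debugContext.debugData.behaviors (field names taken from the post; the regex for the key=value format is an assumption about the value layout):

```
index="okta" actor.alternateId="*mydomain*" outcome.reason=*CHALLENGE*
| rex field=debugContext.debugData.behaviors max_match=0 "\s*(?<behavior>[^{}=,]+)=(?<verdict>POSITIVE|NEGATIVE)"
| eval pair=mvzip(behavior, verdict, "=")
| mvexpand pair
| stats count by outcome.reason, pair
```

After the mvexpand, each behavior/verdict combination (e.g. "New Device=POSITIVE") is its own row, so you can count or filter on individual reasons.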
This worked - thanks. lol, I was looking for the differences between the previous versions of sendemail. I was hoping that 9.1.2 would have gotten deployed sooner rather than later, but this will work until then.
Probably because you didn't say you wanted "*" and you are probably missing some backslashes - try this

<input type="checkbox" token="checkbox" id="checkABC">
  <label></label>
  <choice value="*">All</choice>
  <choice value="AA">AA</choice>
  <choice value="BB">BB</choice>
  <choice value="CC">CC</choice>
  <change>
    <condition match="match($checkbox$,&quot;\\*&quot;)">
      <unset token="A"></unset>
      <unset token="B"></unset>
      <unset token="C"></unset>
      <set token="form.checkbox">*</set>
    </condition>
    <condition>
      <eval token="A">if(match($checkbox$,"AA"),"A",null())</eval>
      <eval token="B">if(match($checkbox$,"BB"),"B",null())</eval>
      <eval token="C">if(match($checkbox$,"CC"),"C",null())</eval>
    </condition>
  </change>
  <default>AA,BB,CC</default>
  <initialValue>AA,BB,CC</initialValue>
  <delimiter>,</delimiter>
</input>
Sorry, I meant to say that the sizes of the indexes (index1, index2, index3, and so on) all together sum up to 250 GB. But in the sizing case with data models, it was 250 GB for one of them, 11 GB for another, some megabytes for the next one, and so on. Actually, the data model has only the requested field accelerated, but the summary range is 1 year. This obviously makes sense for the growing size of the data models.

Thanks,
Pravin
But then it won't be by time as well, no?
Hi @sarit_s, the chart command will not work with multiple fields; try using stats.
Hi @sarit_s,
in the chart command you can use only one field for the OVER or the BY option; you cannot use two fields. The only way (if acceptable) is to concatenate the two fields into one:

| eval Column=UserAgent."|".LoginType
| chart values(SuccessRatioBE) AS SuccessRatioBE over _time BY Column

Ciao.
Giuseppe
Hi all, I'm a newbie to Splunk and I need to check if Splunk Cloud is receiving traffic from our network infrastructure. I thought of doing it via an API request, but I can't find the URL where to send the request. Could anybody point me to documentation on how to do this, or tell me how I can do it? Thanks in advance!
David
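If search access is enough (no REST call needed), a quick sanity check that events are arriving is a tstats count over a recent window; the 15-minute window here is just an illustrative choice:

```
| tstats count where index=* earliest=-15m by index sourcetype
```

If it returns rows for your network indexes, data is flowing. As far as I know, direct REST API access to Splunk Cloud search endpoints typically has to be enabled via a support case, so checking from the Search & Reporting app is usually the quickest route.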
Hello, I'm trying to run a chart command grouped by 2 fields but I'm getting an error. This is my query:

| chart values(SuccessRatioBE) as SuccessRatioBE over _time by UserAgent LoginType

and I'm getting this error: "Error in 'chart' command: The argument 'LoginType' is invalid." I also tried separating the fields with a comma, and with backticks.
In the example, lexicographic order will process the transformations in the same sequence, since TRANSFORMS-1 comes before TRANSFORMS-2 (and so on).
Hi @cmlombardo,
the order of transformations is relevant! Just as an example: if you read https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues , when you want to keep some events and discard the rest, you have to execute first the transformation on all the events (REGEX = .) and then the transformation on the subset of data to keep; if you change the order, the transformation doesn't work.
Ciao.
Giuseppe
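The keep-some/discard-the-rest pattern from that docs page can be sketched as a props.conf/transforms.conf pair (the sourcetype name and the keep-regex here are hypothetical placeholders):

```
# props.conf
[my_sourcetype]
TRANSFORMS-routing = setnull, setparsing

# transforms.conf
# First: match every event and route it to the null queue (discard)
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# Then: pull the events you want to keep back to the index queue
[setparsing]
REGEX = pattern_to_keep
DEST_KEY = queue
FORMAT = indexQueue
```

If the two names were swapped in TRANSFORMS-routing, setnull would run last and everything would be discarded, which is exactly the ordering pitfall being discussed.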
It's not necessarily true that the data will be the same after each example. In some (many/most?) cases, the order of transformations could be significant, which is why I recommend using the second format.
So, if they are processed in lexicographical order, then the result should be the same once the data passes through my 2 transformation examples. Best practice, as I understand it, is to list the transformations in the second form, TRANSFORMS=tr1,tr2,tr3, so that there is no doubt about the order in which they are processed.
Hi @_pravin,
the disk space used for accelerated Data Models is usually calculated with this formula:

disk_space = daily_used_license * 3.4

This formula is described in the Splunk Architecting training course. So it's very strange that you have 250 GB of index and 250 GB of Data Model. This is possible only if you also configured the _raw field in your Data Model, and this isn't a best practice, because in a Data Model you should have only the fields required by your searches, not the whole _raw of all events.
Ciao.
Giuseppe
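A worked example of the formula above, with purely hypothetical numbers: an index consuming 100 GB/day of license would be estimated at roughly 100 * 3.4 = 340 GB of disk for one year of accelerated Data Model. In SPL this could be sketched as:

```
| makeresults
| eval daily_used_license_gb=100
| eval dm_disk_gb=daily_used_license_gb * 3.4
```

Substituting your own daily license figure gives a quick ballpark to compare against the actual Data Model size.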
Make sure you have SSL configured correctly on the appropriate Splunk server. Use this link as a reference: https://docs.splunk.com/Documentation/Splunk/9.1.1/Security/ConfigureSplunkforwardingtousesignedcertificates
Hi @gcusello,

Our data models don't use the same space as the index, so the accelerated data doesn't have a cap on the limit. I really liked your extended answer, but could you please explain the line below in quotes? I find it a bit confusing.

"Usually the space occupation for one year of an accelerated DataModel is around the daily license consuption for that index moltiplicated for 3.4."

Regards,
Pravin
As I understand it, transforms listed separately are processed in lexicographical order, whereas those listed in a single TRANSFORMS setting are processed in the order given. IOW, this

[test]
TRANSFORMS-3=tr3
TRANSFORMS-1=tr1
TRANSFORMS-2=tr2

is the same as this

[test]
TRANSFORMS-1=tr1
TRANSFORMS-2=tr2
TRANSFORMS-3=tr3

But this

[test]
TRANSFORMS-1=tr3,tr1,tr2

will process the transforms in the listed order.
Hello,

I hope everything is okay. I need your help.

I am using this SPL request:

index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
| append [search index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
  | chart count over id_flux by libelle
  | eval IN_BT_OUT_BT=IN_BT+OUT_BT
  | eval IN_PREC_OUT_PREC=IN_PREC+OUT_PREC
  | eval IN_RANG_OUT_RANG=IN_RANG+OUT_RANG
  | where IN_BT_OUT_BT>=2
  | where IN_PREC_OUT_PREC>=2
  | where IN_RANG_OUT_RANG>=2
  | transpose
  | search column=id_flux
  | transpose
  | fields - "column"
  | rename "row 1" as id_flux]
| stats last(_time) as last_time by id_flux libelle

I have these results, but I can't get what I want. Let me explain. For a given id_flux, I'd like to have the response times defined as follows:
- out_rang time - in_rang time
- out_prec time - in_prec time
- out_bt time - in_bt time

Here is the full query I used:

search index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
| chart count over id_flux by libelle
| eval IN_BT_OUT_BT=IN_BT+OUT_BT
| eval IN_PREC_OUT_PREC=IN_PREC+OUT_PREC
| eval IN_RANG_OUT_RANG=IN_RANG+OUT_RANG
| where IN_BT_OUT_BT>=2
| where IN_PREC_OUT_PREC>=2
| where IN_RANG_OUT_RANG>=2
| transpose
| search column=id_flux
| transpose
| fields - "column"
| rename "row 1" as id_flux
| eval sortorder=case(libelle=="IN_PREC",1, libelle=="OUT_PREC" AND statut=="KO",2, libelle=="OUT_PREC" AND statut=="OK",3, libelle=="IN_BT",4, libelle=="OUT_BT",5, libelle=="IN_RANG",6, libelle=="OUT_RANG" AND statut=="KO",7, libelle=="OUT_RANG" AND statut=="OK",8)
| sort 0 sortorder
| eval libelle=if(sortorder=2,"ARE", if(sortorder=3,"AEE", if(sortorder=7,"BAN", if(sortorder=8,"CCO", libelle))))
| table libelle sortorder _time
| chart avg(_time) over sortorder by libelle
| filldown AEE, ARE, IN_BT, IN_PREC, OUT_BT, IN_RANG, OUT_RANG
| eval OK=abs(OUT_BT-IN_BT)/1000
| eval AEE=abs(AEE-IN_PREC)/1000
| eval ARE=abs(ARE-IN_PREC)/1000
| eval CCO=abs(CCO-IN_RANG)
| eval BAN=abs(BAN-IN_RANG)
| fields - sortorder
| stats values(*) as *
| table AEE ARE BAN CCO OK
| transpose
| rename "row 1" as "temps de traitement (s)"
| rename column as "statut"
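If there is a single IN/OUT event pair per id_flux, a much simpler sketch of the three response times might look like this (field and label names are taken from the post; the single-pair-per-flux assumption may not hold for your data):

```
index="bloc1rg" libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
| chart latest(_time) over id_flux by libelle
| eval temps_rang=OUT_RANG-IN_RANG
| eval temps_prec=OUT_PREC-IN_PREC
| eval temps_bt=OUT_BT-IN_BT
| table id_flux temps_rang temps_prec temps_bt
```

The chart puts one timestamp per libelle on each id_flux row, so the evals compute each OUT-minus-IN delta directly, in seconds, without transposes.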