All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I have a connection set up in Splunk DB Connect on my HF (the HF is connected to my SH, and I know the connection is stable because other sources from the HF reach my SH), but the data is not populated in my index. I also tried connecting to a new index=database on my SH and HF and restarting, and that did not work either.
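A quick check that often narrows this down (a sketch, not a definitive diagnosis; the source filter assumes a recent DB Connect version, which logs under its app name in _internal): first look for input errors, then confirm whether anything at all is landing in the target index.

```
index=_internal source=*splunk_app_db_connect* (ERROR OR FATAL)

| tstats count where index=database by sourcetype, source
```

If the second search returns nothing while the first shows no errors, the input may be running but checkpointing past all rows (a rising-column misconfiguration is a common cause).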
Hi all, we get this error: Analytics service unavailable: Host "10.10.240.102" returned code 401 with message 'Status code: [401], Message: HTTP 401 Unauthorized'. Please contact support if this error persists. This happens even though I have made sure that analytics.accountAccessKey is the same as ad.accountmanager.key.eum, which in turn is the same as appdynamics.es.eum.key in the admin console.
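For reference, a hypothetical properties excerpt showing the three values the post says were compared; a 401 from the Events Service usually means at least one of them still diverges in practice (watch for trailing whitespace, or a stale copy on one node that hasn't been restarted since the change):

```
# hypothetical excerpt -- all three must hold the identical key value
analytics.accountAccessKey = <access-key>    # Analytics / Events Service side
ad.accountmanager.key.eum  = <access-key>    # Controller admin console setting
appdynamics.es.eum.key     = <access-key>    # Events Service properties
```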
App TA_MongoDB_Atlas (6238) pages not loading after migration to 9.1.2.
Hi everyone, we've created a new TA to get data in from an API. This was done on the HF, and the data is being sent to our Cloud instance; however, the field values are getting duplicated. I tried changing the INDEXED_EXTRACTIONS and KV_MODE settings on the HF as explained by many others, without success. In Cloud there wasn't a sourcetype for this data feed, so we created one manually and set INDEXED_EXTRACTIONS = none and KV_MODE = json, but this made no change. I've also added a stanza in local.meta on the HF, as suggested by others, as follows: export = system. Here's a snap of the sourcetype stanza on the HF. As you can see, INDEXED_EXTRACTIONS and KV_MODE are both set to none, but I've tried pretty much every combination possible, which suggests to me the issue is in the Cloud.

```
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = false
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE =
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
HEADER_MODE =
INDEXED_EXTRACTIONS = none
KV_MODE = none
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER = ([\r\n]+)
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
NO_BINARY_CHECK = true
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = 0
TIME_FORMAT =
TRANSFORMS =
TRUNCATE = 10000
category = Structured
detect_trailing_nulls = false
disabled = false
maxDist = 100
priority =
pulldown_type = 1
sourcetype =
termFrequencyWeightedDist = false
```

Any help would be greatly appreciated.
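A common cause of duplicated values with JSON feeds is the payload being extracted twice: once at index time on the HF and again at search time in Cloud. A minimal sketch of the usual fix, assuming the HF keeps index-time JSON extraction (the sourcetype name is a placeholder):

```
# props.conf on the HF (parsing tier) -- extract the JSON once, at index time
[my_api_json]
INDEXED_EXTRACTIONS = json

# props.conf on the Cloud (search tier), same sourcetype -- do not extract again
[my_api_json]
KV_MODE = none
AUTO_KV_JSON = false
```

The search-time stanza has to live where the search runs; a KV_MODE = none set only on the HF has no effect on search-time extraction, which may be why the combinations tried there made no difference.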
So I have my application set up on my Controller, and I'm able to see every workload I'm sending to that Controller application. I have quite a lot of workloads, and every time I run them I have to open that application and look at the application flow to validate that my workloads are showing up as expected and that the call counts for each node are accurate (regardless of error or success). I am looking for a way to automate opening the application every time to validate the flow map and call counts; it could be an API that returns the application flow map metrics. Is there such an API that will help me with this? I was looking at the AppDynamics documentation and the discussions but didn't find anything related to this. In short, I want to get the "application flow map" data (as seen on the application dashboard) from the AppDynamics APIs. Thanks in advance, Surafel
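I'm not aware of a documented endpoint that returns the flow map exactly as drawn, but the per-tier numbers behind it can be pulled with the Metric API. A hedged sketch (controller host, credentials, and application name are placeholders; the wildcard tier segment pulls Calls per Minute for every tier over the last hour):

```
curl -u "user@account:password" \
  "https://controller.example.com/controller/rest/applications/MyApp/metric-data?metric-path=Overall%20Application%20Performance%7C*%7CCalls%20per%20Minute&time-range-type=BEFORE_NOW&duration-in-mins=60&output=JSON"
```

Swapping in Errors per Minute, or tier-to-backend metric paths from the metric browser, would cover the error/success split; comparing the returned values against expected counts could then be scripted.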
As the title suggests, I got some SSL certs from my teams, but because the default SSL port is 8443, the certificates aren't being recognized. I'm kind of a noob to certificates, though, so I hope I'm explaining it right.
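If this is about Splunk Web serving HTTPS on 8443, a sketch of the relevant settings (the paths are placeholders; the cert file must contain the server certificate plus any intermediates, and the key must match it — a port number by itself doesn't invalidate a certificate, so a cert/key or chain mismatch is the more likely cause):

```
# web.conf (sketch)
[settings]
enableSplunkWebSSL = true
httpport = 8443
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/myServerPrivateKey.key
```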
I have a JSON which I need help breaking into key-value pairs.

```
"lint-info": {
    "-Wunused-but-set-variable": [
        {
            "location": { "column": 58, "filename": "ab1", "line": 237 },
            "source": "logic [MSGG_RX_CNT-1:0][MSGG_RX_CNT_MAXWIDTH+2:0] msgg_max_unrsrvd_temp; // temp value including carry out",
            "warning": "variable 'msgg_max_unrsrvd_temp' is assigned but its value is never used"
        },
        {
            "location": { "column": 58, "filename": "ab2", "line": 254 },
            "source": "logic msgg_avail_cnt_err; // Available Counter update error detected",
            "warning": "variable 'msgg_avail_cnt_err' is assigned but its value is never used"
        }
    ],
    "-Wunused-genvar": [
        {
            "location": { "column": 11, "filename": "ab3", "line": 328 },
            "source": "genvar nn,oo;",
            "warning": "unused genvar 'oo'"
        }
    ],
    "total": 3,
    "types": [ "-Wunused-but-set-variable", "-Wunused-genvar" ]
},
```

I need to get a table with Type, filename, and line values, like below:

Type                          Filename   Line
-Wunused-but-set-variable     ab1        237
-Wunused-but-set-variable     ab2        254
-Wunused-genvar               ab3        328

Thanks
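A sketch of one way to do this, assuming the two warning types are known up front (`<base search>` stands for the index/sourcetype filter). Each typed array is pulled out with spath, tagged with its type, expanded, and then the location fields are read from each array element:

```
<base search>
| spath path="lint-info.-Wunused-but-set-variable{}" output=entry
| eval Type="-Wunused-but-set-variable"
| append
    [ search <base search>
      | spath path="lint-info.-Wunused-genvar{}" output=entry
      | eval Type="-Wunused-genvar" ]
| mvexpand entry
| spath input=entry path=location.filename output=Filename
| spath input=entry path=location.line output=Line
| table Type Filename Line
```

If the set of types varies between events, the `types` array would have to drive the extraction instead, which is considerably harder in pure SPL.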
Running the search below gives me a horizontal list of the fields and values where I scroll left to right. How do you change the results to list the fields and values vertically, where I scroll down?

```
| rest /services/data/indexes splunk_server=* | where title = "main"
```
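A sketch using transpose, which flips the table so each field becomes its own row and each result row becomes a column (row 1, row 2, ... — one per splunk_server that matches):

```
| rest /services/data/indexes splunk_server=*
| where title = "main"
| transpose
```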
I have a Splunk query below that returns me ( ( ( list_value2="dev1" OR list_value2="dev2" OR list_value2="dev5" OR list_value2="dev6" ) ) ). I want to use these 4 values as a list to query, using the IN operator, from another main search, as shown in the second code snippet.

```
index=main label=y userid=tom
| fields associateddev
| eval list_value = replace(associateddev,"{'","")
| eval list_value = replace(list_value,"'}","")
| eval list_value = split(list_value,"', '")
| mvexpand list_value
| stats values(list_value) as list_value2
| format
```

I want to use the results from this as part of a subsearch to query another source, as shown below. Ideally, the subsearch would return a list that I can just use via | where hname IN list_value2. But list_value2 is returning me this ( ( ( list_value2="dev1" OR list_value2="dev2" OR list_value2="dev5" OR list_value2="dev6" ) ) ) weird string.

```
index="main" label=x
| where hname IN [search index=main label=y userid=tom
    | fields associateddev
    | eval list_value = replace(associateddev,"{'","")
    | eval list_value = replace(list_value,"'}","")
    | eval list_value = split(list_value,"', '")
    | mvexpand list_value
    | stats values(list_value) as list_value2]
| table _time, hname, list_value2
```

I have tried | stats values(list_value) as search | format mvsep="," "" "" "" "" "" ""] but I still get the error: Error in 'search' command: Unable to parse the search: Right hand side of IN must be a collection of literals. '(dev1 dev2 dev5 dev6)' is not a literal.
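The where ... IN form only accepts literal values, never a subsearch. A sketch of the usual workaround: name the field in the subsearch after the outer field (hname) and let the subsearch's default formatting turn it into an ( hname="dev1" OR hname="dev2" ... ) filter applied directly to the base search:

```
index="main" label=x
    [ search index=main label=y userid=tom
      | fields associateddev
      | eval hname = replace(associateddev,"{'","")
      | eval hname = replace(hname,"'}","")
      | eval hname = split(hname,"', '")
      | mvexpand hname
      | dedup hname
      | fields hname ]
| table _time, hname
```

Because the field name matches, no explicit format or IN is needed; the OR expression the subsearch emits is exactly the filter wanted.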
Hello to everyone! I have a curious situation: I have log files that I am collecting via the Splunk UF. The log file does not contain a whole timestamp; one part of the timestamp is contained in the file name, and the other is placed directly in the event. As I found in other answers, I have two options. 1. INGEST_EVAL at the indexer layer: I did not understand how I could take one part from the source and glue it together with the _raw data (link to the answer). 2. Use a handmade script to create a valid timestamp for events; this is more understandable for me, but it looks like reinventing the wheel. So the question is: may I use the first option, and if so, how? This is an example of the source: E:\logs\rmngr_*\24020514.log (* is some number; 24 = year, 02 = month, 05 = day, 14 = hour). And this is an example of the event, where 45:50.152011 is the minute, second, and subsecond:

45:50.152011-0,CONN,3,process=rmngr,p:processName=RegMngrCntxt,p:processName=ServerJobExecutorContext,OSThread=15348,t:clientID=64658,t:applicationName=ManagerProcess,t:computerName=hostname01,Txt=Clnt: DstUserName1: user@domain.com StartProtocol: 0 Success
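Yes, the first option can work. A sketch under these assumptions: the props/transforms live on the first full instance (HF or indexer, not the UF), the sourcetype name is a placeholder, and subseconds are dropped for simplicity. INGEST_EVAL runs after the normal timestamp step, so it can overwrite _time by gluing the eight digits from source onto the MM:SS prefix of _raw:

```
# props.conf (sketch)
[rmngr_log]
DATETIME_CONFIG = CURRENT
TRANSFORMS-settime = rmngr_time

# transforms.conf (sketch)
[rmngr_time]
INGEST_EVAL = _time=strptime(replace(source, ".*(\d{8})\.log$", "\1") . substr(_raw, 1, 5), "%y%m%d%H%M:%S")
```

Here replace pulls 24020514 out of the file name, substr takes 45:50 from the event, and strptime parses the concatenation. Worth testing against a copy of the data before enabling, since a bad expression silently leaves _time at index time.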
Hello, I have a question about an SPL request. I have these extracted fields for the entry data. I used this SPL request:

```
| union maxtime=500 timeout=500
    [ search index="idx_arv_ach_appels_traites" AND (activite_cuid="N1 FULL" OR activite="FULL AC") AND (NOT activite_cuid!="N1 FULL" AND NOT activite!="FULL AC")
      | stats sum(appels_traites) as "Nbre appels" by date_month ]
    [ search index="idx_arv_ach_tracage" (equipe_travail_libelle="*MAROC N1 AC FULL")
      | eval date=strftime(_time,"%Y-%m-%d")
      | dedup date,code_alliance_conseiller_responsable,num_client
      | chart count(theme_libelle) as "Nbre_de_tracagesD" by date_month ]
    [ search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*") AND (code_resolution="474"OR"836"OR"2836"OR"2893"OR"3085"OR"3137"OR"3244"OR"4340"OR"4365"OR"4784"OR"5893"OR"5896"OR"5897"OR"5901"OR"5909"OR"5914"OR"6744"OR"7150"OR"8020"OR"8531"OR"8534"OR"8535"OR"8548"OR"8549"OR"8709"OR"8876"OR"8917"OR"8919"OR"8946"OR"8961"OR"8962"OR"8970"OR"8974"OR"8998"OR"8999"OR"9000"OR"9001"OR"9004"OR"9006"OR"9007"OR"9010"OR"9011"OR"9012"OR"9048"OR"9052"OR"9058"OR"9059"OR"9069"OR"9088"OR"9089"OR"9090"OR"9095"OR"9107"OR"9108"OR"9116"OR"9148"OR"9150"OR"9169"OR"9184"OR"9190"OR"9207"OR"9208"OR"9209"OR"9211"OR"9214"OR"9223"OR"9239"OR"9240"OR"9241"OR"9248"OR"9251"OR"9274"OR"92752"OR"9276"OR"9288"OR"9299"OR"9300"OR"9302"OR"9323"OR"9324"OR"9366"OR"9382"OR"9385"OR"9447"OR"9450"OR"9455"OR"9466"OR"9467"OR"9476"OR"9516"OR"9559"OR"9584"OR"9603"OR"9627"OR"9633"OR"9640"OR"9654"OR"9670"OR"9710"OR"9735"OR"9740"OR"9782"OR"9784"OR"9785"OR"9786"OR"9794"OR"9817"OR"9839"OR"9919"OR"9932"OR"10000"OR"10010"OR"10017"OR"10022"OR"10048"OR"10049"OR"10053"OR"10081"OR"10099"OR"10100"OR"10103"OR"10104"OR"10105"OR"10116"OR"10118"OR"10142"OR"10143"OR"10153"OR"10160"OR"10162"OR"10165"OR"10185"OR"10189"OR"10190"OR"10191"OR"10199"OR"10206"OR"10209"OR"10216"OR"10229"OR"10233"OR"10241"OR"10256"OR"10278"OR"10280"OR"10288"OR"10289"OR"10290"OR"10299"OR"10330"OR"10331"OR"10367"OR"10432"OR"10474"OR"10496"OR"10499"OR"10506"OR"10524"OR"10525"OR"10526"OR"10527"OR"10528"OR"10530"OR"10531"OR"10532"OR"10534"OR"10535"OR"10536"OR"10537"OR"10538"OR"10540"OR"10541"OR"10543"OR"10557"OR"10558"OR"10560"OR"10561"OR"10579"OR"10592"OR"10675"OR"10676"OR"10677"OR"10678"OR"10680"OR"10681"OR"10704"OR"10748"OR"10759"OR"10760"OR"10764"OR"10766"OR"10768"OR"10769"OR"10770"OR"10771"OR"10783"OR"10798"OR"10799"OR"10832"OR"10857"OR"10862"OR"10875"OR"10928"OR"10929"OR"10933"OR"10934"OR"10941"OR"10947"OR"10962"OR"10966"OR"10969"OR"10977"OR"10978"OR"11017"OR"11085"OR"11114"OR"11115"OR"11116"OR"11138"OR"11139"OR"11140"OR"11141"OR"11142"OR"11143"OR"11144"OR"11219"OR"11252"OR"11239"OR"11268"OR"11326"OR"11327"OR"11328"OR"11329"OR"11410"OR"11514"OR"11552"OR"11992"OR"12012"OR"12032"OR"12033"OR"12034"OR"12035"OR"12036"OR"12037"OR"12038"OR"12039"OR"12040"OR"12041"OR"12152")
      | chart sum(total) as "Nbre_de_tracagesB" by date_month ]
    [ search index="idx_arv_ach_cas_traces" source="*ac_at*" (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01") (LibEDO="*MAROC N1 AC FULL"OR"*SENEGAL N1 AC FULL*")
      | chart sum(total) as "Nbre_de_tracages_total" by date_month ]
    [ search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL")
      | chart count(appreciation) as "Nbre_de_retour_enquete" by date_month ]
    [ search index="idx_arv_ach_enquetes_satisfaction" source="*maroc_base_1*" (conseiller_createur_equipe="*MAROC N1 AC FULL")
      | eval nb5=case(appreciation="5", 5)
      | eval nb123=case(appreciation>="1" and appreciation<="3", 3)
      | eval nb1234=case(appreciation>="1" and appreciation<="4", 4)
      | eval nbtotal=case(appreciation>="1" and appreciation<="5", 5)
      | stats count(nb5) as "Nbre_de_5", count(nb123) as "Nbre_de_123", count(nb1234) as "Nbre_de_1234", count(nbtotal) as "Nbre_total" by date_month
      | eval pourcentage=round((Nbre_de_5/Nbre_total-(Nbre_de_123/Nbre_total))*100,2)." %"
      | rename pourcentage as deltaSAT
      | table deltaSAT date_month ]
| stats values("Nbre appels") as "Nbre appels" values("Nbre_de_retour_enquete") as "Nbre_de_retour_enquete" values(Nbre_de_tracagesD) as Nbre_de_tracagesD values("Nbre_de_tracagesB") as "Nbre_de_tracagesB" values("Nbre_de_tracages_total") as "Nbre_de_tracages_total" values(deltaSAT) as deltaSAT by date_month
| eval pourcentage=round((Nbre_de_tracagesB/Nbre_de_tracages_total)*100, 2)." %"
| rename pourcentage as "Tx traçages bloquants"
| eval TxTracage=round((Nbre_de_tracagesD/'Nbre appels')*100,2)." %"
| rename TxTracage as "Tx traçages dédoublonnés"
| rename Nbre_de_tracagesD as "Nbre traçages dédoublonnés"
| eval date_month=case(date_month=="january", "01-Janvier", date_month=="february", "02-Février", date_month=="march", "03-Mars", date_month=="april", "04-Avril", date_month=="may", "05-Mai", date_month=="june", "06-Juin", date_month=="july", "07-Juillet", date_month=="august", "08-Août", date_month=="september", "09-Septembre", date_month=="october", "10-Octobre", date_month=="november", "11-Novembre", date_month=="december", "12-Décembre")
| sort date_month
| eval date_month=substr(date_month, 4)
| fields date_month, "Tx traçages bloquants", "Nbre appels", "Nbre traçages dédoublonnés", "Tx traçages dédoublonnés"
| transpose 15 header_field=date_month
```

I obtain a result, but I have a problem: I haven't worked on the date_year field, and I don't get the table in chronological order. Can you help me please?
Hello, is there any way that I can pull in the data from all CloudWatch log groups under certain paths? I see that we shouldn't use wildcards. I have many log groups under a certain path, and in the future more log groups will be added; it would be operational overhead for the team to keep adding them whenever a new log group is created, e.g. /aws/lambda/*, /aws/*. Can you guide me here? I am using the Splunk Add-on for AWS with the pull-based mechanism.
Hi all, I have a field called summary in my search: Failed backup of the transaction log for SQL Server database 'model' from 'WSQL040Q.tkmaxx.tjxcorp.net\\MSSQLSERVER'. I am creating this search for a ServiceNow alert, and I am sending this summary field value under comments in ServiceNow. I need it to break into two lines, like this:

Line 1: Failed backup of the transaction log for SQL Server database 'model' from
Line 2: 'WSQL040Q.tkmaxx.tjxcorp.net\\MSSQLSERVER'.

How do I implement this in my search? Thanks in advance!
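A sketch that inserts a literal newline before the quoted server name (assuming the break should always come at the last " from "; whether ServiceNow renders the newline depends on the alert action's field mapping):

```
| rex mode=sed field=summary "s/ from '/ from\n'/"
```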
I am trying to extract some field values that come in the following format:

```
<date>00-mon-year</date>
<DisplayName>example</DisplayName>
<Hostname>example</Hostname>
```

When using the SPL rex function I am able to extract the fields using rex field=_raw "Hostname>(?<hostname>.*)<". I have then tried to use an inline regex in props with the same regex (without quotes), and it does not work. I have also used a transforms.conf with the stanza as follows:

```
[hostname]
FORMAT = Hostname::$1
REGEX = Hostname>(?<hostname>(.*?))</Hostname
```

then in the props:

```
REPORT -Hostname = hostname
```

and this does not work. However, I have another source that pulls the same type of logs, and I am able to use an inline regex in SPL format just fine with no issues. This issue is only specific to this source, which I have as [source::/opt/*]. Any ideas on the fix?
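Two things in the quoted config stand out: the props attribute has a space before the dash (it must be REPORT-<name>), and the REGEX is missing the closing > on </Hostname. A corrected sketch (with a named capture group, the FORMAT line isn't needed):

```
# transforms.conf
[hostname]
REGEX = Hostname>(?<hostname>.*?)</Hostname>

# props.conf
[source::/opt/*]
REPORT-hostname = hostname
```

REPORT extractions are search-time, so this pair also has to be deployed to the search head, not just the forwarder; the other source working may simply mean it matches a sourcetype-based stanza that already exists there.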
I am looking for a specific query where I can alter the row values after the final output and create a new column with the new value. For example, I have written the below query:

```
index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv"
| rex field=_raw "\"(?<filename>\d*\.\d*)\"\,\"(?<filesize>\d*\.\d*)\"\,\"(?<filelocation>\S*)\""
| search filename="*" filesize="*" filelocation IN ("*cl3*", "*cl1*")
| table filename, filesize, filelocation
```

which gives me the following output:

filename      filesize           filelocation
012624.1230   13253.10546875     E:\totalview\ftp\acd\cl1\backup_modified\012624.1230
012624.1230   2236.3291015625    E:\totalview\ftp\acd\cl3\backup\012624.1230
012624.1200   13338.828125       E:\totalview\ftp\acd\cl1\backup_modified\012624.1200
012624.1200   2172.1640625       E:\totalview\ftp\acd\cl3\backup\012624.1200
012624.1130   13292.32421875     E:\totalview\ftp\acd\cl1\backup_modified\012624.1130
012624.1130   2231.9658203125    E:\totalview\ftp\acd\cl3\backup\012624.1130
012624.1100   13438.65234375     E:\totalview\ftp\acd\cl1\backup_modified\012624.1100

BUT I would like the row values under the filelocation column to be replaced by "ACD55" where the file location contains cl1 and by "ACD85" where it contains cl3. So the desired output would be:

filename      filesize           filelocation
012624.1230   13253.10546875     ACD55
012624.1230   2236.3291015625    ACD85
012624.1200   13338.828125       ACD55
012624.1200   2172.1640625       ACD85
012624.1130   13292.32421875     ACD55
012624.1130   2231.9658203125    ACD85
012624.1100   13438.65234375     ACD55

The raw events are like below:

"020424.0100","1164.953125","E:\totalview\ftp\acd\cl3\backup\020424.0100"
"020624.0130","1754.49609375","E:\totalview\ftp\acd\cl1\backup_modified\020624.0130"

Please suggest.
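A sketch using eval/case before the final table (it assumes the substrings cl1 and cl3 only ever occur in the path segment being tested):

```
| eval filelocation=case(match(filelocation, "cl1"), "ACD55",
                         match(filelocation, "cl3"), "ACD85",
                         true(), filelocation)
| table filename, filesize, filelocation
```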
Horizontal scan: an external scan against a group of IPs for a single port.

Vertical scan: an external scan of a single IP against multiple ports.
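A detection sketch for both patterns (the index, field names, and thresholds are assumptions to adapt to your firewall data):

```
index=firewall
| stats dc(dest_ip) as unique_hosts, dc(dest_port) as unique_ports by src_ip
| eval scan_type=case(unique_hosts>50 AND unique_ports<=3, "horizontal",
                      unique_ports>50 AND unique_hosts<=3, "vertical")
| where isnotnull(scan_type)
```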
I have the following command to create a visualization on a choropleth map, which works, but I only get the categorical color and volume as the legend (without headers). How can I add to the legend another column with the full country name that I retrieve from geo_attr_countries and, if possible, headers for the columns?

```
index=main "registered successfully"
| rex "SFTP_OP_(?<country>(\w{2}))"
| stats count by country
| rename country AS iso2
| lookup geo_attr_countries iso2 OUTPUT country
| stats sum(count) by country
| rename sum(count) AS Volume
| geom geo_countries featureIdField="country"
```

Basically, if possible, I am trying to get a legend in the bottom-right corner, something like the one below:

Color (categorical)   Country Name   Volume
<color>               China          124
<color>               Brazil         25
Hello Team, hope you are doing well.

- What does the retention configuration look like for 2 years (1 year searchable and 1 year archived) on a Linux instance, and how does it work (does each year have its own configuration, and how does that work)?
- What are the paths and instances where those configurations are stored/saved on a Linux instance (CLI)?
- What link may I use to learn more about retention?

Thank you in advance.
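A sketch of the usual shape of this in indexes.conf on the indexer (typically under $SPLUNK_HOME/etc/system/local/ or an app's local/ directory; the index name and archive path here are placeholders). Buckets stay searchable until frozenTimePeriodInSecs, then are copied to the archive directory instead of being deleted; pruning the archive after the second year is not automatic and needs an external job such as a cron script:

```
# indexes.conf (sketch)
[my_index]
# searchable window: ~1 year, in seconds
frozenTimePeriodInSecs = 31536000
# archive aged-out buckets instead of deleting them
coldToFrozenDir = /opt/splunk_archive/my_index
```

The Splunk documentation on setting retirement and archiving policy covers these settings in depth.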
Hi everyone, we need quick help overriding the ACL for an endpoint from our add-on application. We are making a POST request to the endpoint https://127.0.0.1:8089/servicesNS/nobody/{app_name}/configs/conf-{file_name} to modify configuration files, but it gives the error: "do not have permission to perform this operation (requires capability: admin_all_objects)". How do we override this endpoint to use a different capability/role?
Hi, I installed SA_CIM_Vladiator, and when running the percentage checks to see data model coverage I see gaps: fields that are extracted, or that are present on specific indexes, are not returned by the app in its results.